zsh compinit: insecure directories, run compaudit for list.

If you encounter this prompt:
zsh compinit: insecure directories, run compaudit for list.
Ignore insecure directories and continue [y] or abort compinit [n]? 
And you've googled, binged, ChatGPT'ed, and tried everything, and you still get it.  This post is for you.

Do this: put this line in (create or append to) the file .zshenv in your home directory:
skip_global_compinit=1
Save it and restart your shell; the message should be gone.

If you are on a Mac, you can easily solve this with:
compaudit  # find the insecure directories
sudo chown -R yourname /usr/local/share/zsh
However, if you are in a multi-user Linux environment, you may not be able to do that, and even if you can, the message may be gone for you while other users still get it.
The reason for this message is that zsh sources the global startup files in /etc/zsh before your own, and the global zshrc runs compinit.  zsh sources /etc/zsh/zshrc before .zshrc; that's why even if you put `ZSH_DISABLE_COMPFIX=true` or change other settings in .zshrc, the message is still there.
Putting `skip_global_compinit=1` in .zshenv tells /etc/zsh/zshrc not to run compinit.
If you use oh-my-zsh, you still need to put `ZSH_DISABLE_COMPFIX=true` at the beginning of your .zshrc file; otherwise it gives you warnings other than that message prompt.


apt update issue: Problem parsing dependency

If you encounter this problem when running 'apt update':

E: Problem parsing dependency 21 of libc6-dev-s390x-cross:all=2.31-0ubuntu7cross1
E: Error occurred while processing libc6-dev-s390x-cross (NewVersion2)
E: Problem with MergeList /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_focal_main_binary-amd64_Packages
E: The package lists or status file could not be parsed or opened.

Do these:

sudo rm -fr /var/cache/apt/archives/*

sudo dpkg --clear-avail

sudo rm -fr /var/lib/apt/lists/*

sudo apt update


Biden's 11/4 Early Morning Vote Jump in Michigan (Un-)Explained.

You've probably seen this graph and noticed the sudden jump of Biden's vote total in the early morning on 11/4 in Michigan.

You may have seen Joshua Philipp from The Epoch Times or the POTUS's videos on this jump.  (If you haven't seen them, that's because they are censored by Big Tech and the mainstream media.)

In this post, I explain this jump and why it's fraudulent.

Let's focus on the "jump".

You can look at the data in your browser.  Open it with Firefox:
Click "data", "races", "0", then scroll down to "timeseries", then find entries 465 and 466, click to open both, and click "vote_shares" in each.  You will see:

The jump at 466 happened at 11:31:53Z on 11/4.  That's GMT; in Eastern Time it's 6:31:53am.  Notice the vote dump before it has timestamp 11:31:48Z, 5 seconds earlier.  Edison Research aggregates data from different counties, and 2 counties might happen to report their data 5 seconds apart.  So the 5-second gap is not cause for concern, contrary to what some pundits have said.

The number of votes in this vote dump is 4724327-4574555=149772.  Using the "vote_shares" provided, Biden got 4724327*0.485-4574555*0.47=141258 votes, and Trump got 4724327*0.498-4574555*0.513=5968 votes.
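These estimates can be reproduced with a few lines of Python, using only the totals and rounded shares quoted above from the two Edison timeseries entries:

```python
# Totals and rounded vote shares from Edison timeseries entries 465 and 466
before_total, after_total = 4574555, 4724327
biden_share_before, biden_share_after = 0.47, 0.485
trump_share_before, trump_share_after = 0.513, 0.498

dump_size = after_total - before_total
biden_est = after_total * biden_share_after - before_total * biden_share_before
trump_est = after_total * trump_share_after - before_total * trump_share_before

print(dump_size)         # 149772
print(round(biden_est))  # 141258
print(round(trump_est))  # 5968
```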

These calculated vote counts are not exact because the "vote_shares" are rounded to 3 decimal places.  To get exact vote counts, we turn to another source: the New York Times' "votes remaining" data snapshots.  The data can be found:
By comparing different snapshots of this data, we can find out which county updated its vote total and the exact vote splits.
The data for Wayne County is listed here:

Notice the difference between the rows at 10:43:33Z (blue) and 11:33:20Z (orange).  Votes increased by 607656-457884=149772, so we know the vote dump is from Wayne County.
The exact number of votes for Biden and Trump:
Biden: 408131-268229=139902 votes
Trump: 190951-181974=8977 votes
That's 139902/149772=93.4% for Biden, and 8977/149772=6.0% for Trump.

Now the question is: Are these votes from Detroit?

From Detroit's official election results website: out of 250138 votes, Biden got 233908 and Trump got 12654.  It seems obvious the vote dump is from Detroit because the percentages match.  But let's make sure.
From the Wayne County election results website: out of 874018 votes, Biden got 597170 and Trump got 264553.  So for the NON-Detroit precincts: out of 623880 (874018-250138) votes, Biden got 363262 (597170-233908) and Trump got 251899 (264553-12654).  Percentage-wise, Biden got 363262/623880=58.2% and Trump got 251899/623880=40.4%.
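All of these splits and percentages follow from the numbers quoted above; here is the arithmetic spelled out in Python:

```python
# Wayne County totals from the two NYT "votes remaining" snapshots
total_before, total_after = 457884, 607656
biden_before, biden_after = 268229, 408131
trump_before, trump_after = 181974, 190951

dump = total_after - total_before        # size of the vote dump
biden_gain = biden_after - biden_before  # Biden's share of it
trump_gain = trump_after - trump_before  # Trump's share of it
print(dump, biden_gain, trump_gain)              # 149772 139902 8977
print(round(100 * biden_gain / dump, 1))         # 93.4
print(round(100 * trump_gain / dump, 1))         # 6.0

# Non-Detroit Wayne County split, from the official county and city totals
wayne_total, wayne_biden, wayne_trump = 874018, 597170, 264553
det_total, det_biden, det_trump = 250138, 233908, 12654
rest = wayne_total - det_total                           # 623880
print(round(100 * (wayne_biden - det_biden) / rest, 1))  # 58.2
print(round(100 * (wayne_trump - det_trump) / rest, 1))  # 40.4
```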

So from the vote distribution, the 149772-vote dump could NOT have come from non-Detroit Wayne County precincts.  It could only have come from the City of Detroit.

But it DIDN'T.

From precinct-level data snapshots from the New York Times, in the timeframe between 10:23Z and 11:38Z, the City of Detroit's increase in vote total is 11067.
(The latest version of this data can be found at: https://static01.nyt.com/elections-assets/2020/data/api/2020-11-03/precincts/MIGeneralConcatenator-latest.json , but you need different snapshots of this file and calculate the difference to make sense of the data.)

Up to 10:23Z, Detroit's total was 214644, and it was already included in the 457884 figure from Wayne County.  Detroit's final total is 250138, so the 149772 votes cannot have come from the City of Detroit.

Now we have a problem: we know these 149772 votes came from Wayne County.  They could only have come from Detroit, and they did not come from Detroit.  They seem to have come out of thin air.

No one has provided an explanation for the Biden jump, and no media outlet has disputed or "fact-checked" the fact that it's suspicious and that the 149772 votes were most likely fraudulent.


Pennsylvania Election Night Data Irregularity

This is a plot of the vote counts for the Pennsylvania presidential race.  The data came from Edison Research, which was used by virtually all mainstream media for vote reporting.  You can get the data here.  Open it in Firefox, then click data->races->0->timeseries.

You may notice something strange in the early part of the graph.  Vote counts normally only go up, accumulating as time goes by.  But in this graph there are downward spikes as well as upward ones.  What's going on?

We will zoom in on this weird-looking part and focus on the first 55 vote dumps from Pennsylvania.

The first vote dump (numbered 1 in the plot) out of Pennsylvania has 64535 votes with Biden's share 0.799 and Trump's 0.188.

If we do the calculation, Biden got 51563 votes and Trump got 12132.  Because the vote shares are rounded to 3 decimal places, these per-candidate counts are not exact, even though the total is.  But we can actually get the exact count for each candidate.  The source is also Edison/New York Times: the NYT has timestamped snapshot vote reports for all precincts in certain Pennsylvania counties.  For example, the first report with any non-zero votes is:
By calculating the differences between vote counts in different timestamped reports, we get a clear picture of the vote increments for each candidate in each precinct of some counties at each timestamp.  So we know the exact split of the first vote dump - Biden: 51555, Trump: 12147 - and they are all absentee votes from Allegheny County.  Here's a sample of 2 precincts.
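The estimate-from-shares calculation, and how close it lands to the exact NYT split, looks like this in Python (numbers taken from the text above):

```python
# First Pennsylvania vote dump: total and rounded shares from Edison
total, biden_share, trump_share = 64535, 0.799, 0.188

# Estimates from the rounded shares (truncated to whole votes)
biden_est = int(total * biden_share)
trump_est = int(total * trump_share)
print(biden_est, trump_est)  # 51563 12132

# Exact split from the NYT precinct snapshots
biden_exact, trump_exact = 51555, 12147
# Rounding the shares to 3 decimals costs only a handful of votes
print(abs(biden_est - biden_exact), abs(trump_est - trump_exact))  # 8 15
```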

The next vote dump, of 21818 votes, is from Luzerne County.  The one after that is mostly from Centre County.

The vote dump numbered 2 in the graph is from Philadelphia: Biden 70573, Trump 4279.  This first vote dump out of Philly is highly unusual.  Its Biden/Trump ratio of 16.5 is by far the highest among all vote dumps out of Philadelphia.

In the vote dump marked 3, out of 82711 votes, around 74000 went to Biden and around 8500 went to Trump.  At this point Biden had 254000 votes and Trump only 46000, so Biden held a margin of 200 thousand over Trump.  It looks like Biden was given a huge head start.  But is it fraud?  Yes, it is, because they admitted it in vote dump #4.

In vote dump #4 there are 130027 votes, almost all of them for Trump; Biden got virtually 0.  How can this possibly be a real vote dump?  It cannot.  I think someone was afraid of getting caught because of the unrealistically huge Biden margin, so they bumped Trump's number up so it wouldn't look so suspicious.  But as we will see, this bump would be taken away later.

In the 3 vote dumps between #4 and #5, out of 121000 votes, Biden gained 96000 and Trump gained 24000.  In the vote dump marked 6, out of 113941 votes, Biden got 89000 and Trump 24000.  A large component of this dump is votes from Montgomery County (72823 votes: Biden 64777, Trump 7756).
Biden's margin at this point is again over 200 thousand.

At #7, we have a crazy vote dump of 239084 votes, of which 200000 are for Biden and 40000 for Trump.  Then, after more than a dozen vote dumps, in #8, exactly 239084 votes were taken out.  I guess someone realized a huge 5:1 Biden:Trump vote dump was just too unbelievable; better to take it back and spread the votes out over smaller dumps so they'd look more credible.

In vote dump #9, 114886 votes were taken out.  But only Trump's total took a dive; Biden's actually increased.  It seems they took out the #4 Trump bump.  At this point Biden's margin increased to over 300 thousand.

Next, in vote dump #10, a total of 507047 votes were added: 337K for Biden, 146K for Trump.  This half-million vote dump is absolutely fraudulent!  Of these votes, 173K are from Chester County.  Out of 230 Chester precincts, 203 precincts each reported exactly 675 votes, with Biden 421 and Trump 233.
To verify, open:

 in Firefox, click "Raw Data", then search for "chester" a few times.

Can 203 precincts have exactly the same number of votes and exactly the same vote splits?  Impossible!

Both 421 and 233 are prime numbers.  What's the reason for prime numbers?  If you use prime numbers, you are guaranteed NOT to have a whole-number ratio or numbers that share a common factor (other than 1).  I guess the fraudulent vote-tabulating system was programmed to generate random numbers that approach an overall ratio, but, to avoid suspicion, it tries to generate numbers that have no apparent relationship.  Unfortunately, in this case the random number generator got stuck!
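The primality and no-common-factor claims about 421 and 233 are easy to verify:

```python
from math import gcd

def is_prime(n: int) -> bool:
    """Trial division up to sqrt(n); fine for numbers this small."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

print(is_prime(421), is_prime(233))  # True True
# Two distinct primes share no factor other than 1
print(gcd(421, 233))                 # 1
```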

How many more of these half-million votes are fraudulent?  Most of them.  Again, having realized this fraudulent vote dump was too apparent, in #11, at 2:22GMT (9:22PM EST), they removed a total of 586189 votes!  Which votes did they remove?  I don't know.  But I do know that those fraudulent Chester County votes were not removed: the next Chester vote update wouldn't come until 3 hours later, at 5:27GMT (0:27 EST).

After the huge #11 vote reduction, Biden still held a 200K margin over Trump.  From this point on, Trump was able to catch up and pull ahead.  The "glitch" that was caught on CNN, where exactly 19958 votes were removed from Trump's total and added to Biden's, happened at 4:08GMT (11:08pm EST).  As Trump continued to gain momentum, by 2am EST he had a 700K margin over Biden.  Then came the big nationwide coordinated "halt", and the vote dumps afterward that miraculously narrowed the gap.  Those have everyone's attention.

But I think, just like the one in Maricopa County, the huge early Biden head start in Pennsylvania needs more attention.  The first 50 vote dumps out of Pennsylvania were a complete mess.  Some of them got cleaned up, but a good number of fraudulent votes are still there, in the final total.  They should be investigated and taken out.  The 421:233 Chester votes are still there.  The 75K ratio-16.5 Philly votes are still there.  The 73K 8:1 Montgomery votes are still there.  The 30K 4:1 Centre votes are still there.  If even half of the 200K early fraudulent Biden margin were taken out, Trump would win Pennsylvania (Biden's margin is 82K).


Healthy lifestyle and life expectancy

This is a summary of the Harvard study:
Healthy lifestyle and life expectancy free of cancer, cardiovascular disease, and type 2 diabetes: prospective cohort study https://www.bmj.com/content/368/bmj.l6669
In 3 sentences: 
At age 50, life expectancy free of cancer, cardiovascular disease, and diabetes was 23.7 (95% confidence interval 22.6 to 24.7), 26.4 (25.2 to 27.4), 29.1 (28.0 to 30.0), 31.8 (30.8 to 32.8), and 34.4 (33.1 to 35.5) years among women who adopted zero, one, two, three, and four or five low risk lifestyle factors, respectively. 
Life expectancy free of cancer, cardiovascular disease, and diabetes at age 50 was 23.5 (22.3 to 24.7), 24.8 (23.5 to 26.0), 26.7 (25.3 to 27.9), 28.4 (26.9 to 29.7), and 31.1 (29.5 to 32.5) years among men who adopted zero, one, two, three, and four or five low risk lifestyle factors, respectively. 
The percentage of life expectancy free of cancer, cardiovascular disease, and diabetes from total life expectancies was 74.8%, 77.6%, 80.1%, 82.2%, and 83.6% among women (75.3%, 75.8%, 76.8%, 77.9%, and 79.0% among men) who adopted zero, one, two, three, and four or five low risk lifestyle factors, respectively.
The 5 low risk lifestyle factors are:
1. high Alternate Healthy Eating Index - the higher the better
2. no smoking - the more smoking, the worse
3. moderate to vigorous physical activity - the more the better
4. alcohol - abstaining is only better than heavy drinking; moderate drinking is best, surprise!
5. low BMI - the lower the better, until too low
I'm a little disappointed with these findings.  If you do everything right, you're only expected to gain 31.1-23.5=7.6 disease-free years (for men) and spend only 79.0%-75.3%=3.7% more of your life free of cancer, cvd, and diabetes.
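The gaps come straight out of the figures quoted above; a quick check in Python:

```python
# Disease-free life expectancy at age 50 (years), from the quoted results:
# zero vs. four-or-five low-risk lifestyle factors
women = {"zero": 23.7, "four_or_five": 34.4}
men = {"zero": 23.5, "four_or_five": 31.1}
print(round(men["four_or_five"] - men["zero"], 1))      # 7.6 years gained
print(round(women["four_or_five"] - women["zero"], 1))  # 10.7 years gained

# Share of total life expectancy that is disease-free (percentage points)
print(round(79.0 - 75.3, 1))  # 3.7 points for men
print(round(83.6 - 74.8, 1))  # 8.8 points for women
```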

Keep drinking!


Extracting video from blob url

First press F12, then select the Network tab.  Reload the page, then play the video.  You should see a file called playlist.m3u8.  Right-click it, then copy its address.

On a command line (Windows, Mac, or Linux), type youtube-dl, paste the m3u8 link, then press enter.

Of course you should already have youtube-dl installed.

If the above doesn't work, you can download the m3u8 file.  Look inside it; if it contains relative links to .ts files, for example:

You need to replace each relative link with an absolute link by adding the hostname.  For example, if you downloaded the m3u8 file from https://example.com/, insert 'https://example.com' before every .ts line, so the above becomes:
You need to do this for every line that contains ".ts".
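Instead of editing by hand, you can script the rewrite.  Here is a sketch: `https://example.com` is the placeholder host from the text, and the helper name is mine; point it at your downloaded playlist and host.

```python
def absolutize(lines, host="https://example.com"):
    """Prefix relative .ts segment lines with the host; leave #-tags alone."""
    out = []
    for line in lines:
        # Segment lines contain ".ts"; lines starting with '#' are m3u8 tags
        if ".ts" in line and not line.startswith("#"):
            # Ensure exactly one slash between host and path
            out.append(host.rstrip("/") + "/" + line.lstrip("/"))
        else:
            out.append(line)
    return out

playlist = ["#EXTM3U", "#EXTINF:10,", "segment0001.ts"]
print(absolutize(playlist)[-1])  # https://example.com/segment0001.ts
```

To apply it in place: read playlist.m3u8 with splitlines(), run it through absolutize(), and write the joined result back before running the ffmpeg command below.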

Save the file, then:
ffmpeg -protocol_whitelist file,http,https,tcp,tls -allowed_extensions ALL -i playlist.m3u8 -bsf:a aac_adtstoasc -c copy out.mp4


Disable ubuntu update manager, timer from command line

On the command line:
sudo systemctl disable apt-daily.timer
sudo systemctl disable apt-daily-upgrade.timer
then edit these 2 files:
sudo vi /etc/apt/apt.conf.d/10periodic
sudo vi /etc/apt/apt.conf.d/20auto-upgrades
In both files, change every "1" to "0".


Creating Time Lapsed video from dashcam videos

Time lapse videos are usually created from photos.  Here's how to create one from dashcam videos.
First, move all the videos into a single folder, then list the files and redirect the output into a text file.  On Linux it's `ls > files.txt`; on Windows it's `dir /B > files.txt`.

Then edit this file.  It should list the videos in timestamp order; if not, reorder them manually.  Remove any files that aren't videos, and any videos you don't want included in the final video.  Put 'file ' at the beginning of every line.  (In Vim, you can do this with :%s/^/file /)
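The listing and prefixing steps can also be done in one go with a short Python script.  This is a sketch: the ".mp4" pattern is an assumption, so adjust it to whatever extension your dashcam uses, and sorting by name assumes the timestamp is embedded in the file name (as most dashcams do).

```python
import glob

# Collect clips in the current folder, sorted by name (assumed timestamp order)
clips = sorted(glob.glob("*.mp4"))

# Write the list in the format ffmpeg's concat demuxer expects: file 'name'
# (quotes protect names containing spaces)
with open("files.txt", "w") as f:
    for clip in clips:
        f.write(f"file '{clip}'\n")
```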

Then create a raw video by selecting one frame out of every 30 from all the videos in the list:

ffmpeg -f concat -i files.txt -vf "select=not(mod(n\,30)),setpts=N/(FRAME_RATE*TB)" -vcodec rawvideo -pix_fmt yuv420p -an raw.yuv
Because this is uncompressed raw video, it will be huge, probably tens of gigabytes.  You probably don't want to play it on your computer, let alone upload it to YouTube.  To make it playable, we need to compress it to mp4 format:

ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1920x1080 -i raw.yuv -vcodec libx264 video.mp4
You can see an example of the time lapse video here:


Tornado vs Starlette in 2019

For the last 10 years I have been using mainly Tornado as my web framework of choice.  And I mostly use it synchronously.  Only when dealing with file uploads did I use some of Tornado's asynchronous features.  Most of my apps have database backends, and with a database involved, it's not worth converting my code to asynchronous mode.  Tornado has worked well for me, and I have no intention of dropping it in the near future.  But I do want to keep my eyes open for new tools that let me build fast apps fast.

In the last few years, several new async web frameworks have appeared.  One of them is Starlette, and I decided to evaluate it.  The first step is to compare its basic performance with Tornado's.

Starlette, running under uvicorn, claims to be one of the fastest Python web frameworks.  I copied the code directly from its website:
from starlette.responses import PlainTextResponse

async def app(scope, receive, send):
    assert scope['type'] == 'http'
    response = PlainTextResponse('Hello, world!')
    await response(scope, receive, send)
I ran it with "uvicorn example:app", then measured requests per second using 'ab':
ab -n 1000 -c 10
 The result is on average 4200 requests per second.

I then ran a basic Tornado app, again with code copied directly from the Tornado website:
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
Using the same measurement, the ab result is on average 1900 requests/second.

So it looks like Starlette/uvicorn is more than 2 times faster than Tornado.  However, the code on the Tornado website is actually not the best for production use.  Just replace "app.listen(8888)", the second to last line, with the following:
server = HTTPServer(app)
server.bind(8888)
server.start(4)
(and put "from tornado.httpserver import HTTPServer" at the beginning of the file).  Its performance increases to 4700 requests/second, actually faster than Starlette.

This change makes the Tornado app start 4 processes instead of 1.  Of course, we can do something similar for the Starlette app:
uvicorn --workers 4 example:app
The result now is 7500 requests/second, surpassing multi-process Tornado again.

These are superficial results that don't mean much.  But they did make me appreciate the work that has gone into Tornado to keep it performant over the years.

Result Summary:
  • Tornado (default): 1900 rps
  • Starlette/uvicorn (default): 4200 rps
  • Tornado (4 workers): 4700 rps
  • Starlette/uvicorn (4 workers): 7500 rps


DrJava Font Size Problem

I'm helping my son learn Java.  One of the first things to do when learning Java is picking an IDE.  I like DrJava for its simplicity and small size.  However, when I first ran DrJava, its font size was way too small on my 4K monitor.

There are 2 ways to fix this - the easy way and the hard way.

The easy way first.
Click Edit:Preferences, then click "Display Options" and change "Look and Feel" to the one that ends with "Plastic3DLookandFeel".  (Do this on Windows only; on Mac and Linux the default "Look and Feel" is fine.)
Click "Font" under "Display Options" and change all fonts to double their original size:
Press "OK", then close DrJava and open it again.  Now the fonts should be big enough to read.

(The changes you make are actually saved in a configuration file, .drjava, in your home directory.  You can edit that file directly, but it's not recommended.)

Now the hard way.
For the "easy way", we just changed DrJava's default configuration preferences.  The hard way is to compile DrJava with these changes already made in the source code, so we don't have to change preferences at all.

Clone the DrJava repository on github: https://github.com/DrJavaAtRice/drjava

Try to compile it first:
cd drjava/drjava
ant jar

You must already have ant installed.  If it compiles OK, try running it:
java -jar drjava.jar
(You can just double click drjava.jar file too)

Open src/edu/rice/cs/drjava/config/OptionConstants.java for editing:
Change "Monaco-12" to "Monaco-24"; change "Monospaced-12" to "Monospaced-24";
change "dialog-10" to "dialog-20"; change "dialog-12" to "dialog-24".

Save the file, then recompile (ant jar).  Now DrJava is "pre-configured" with big fonts.

You might say this "hard way" is pointless.  Why on earth would anyone want to do this?  I agree.  But maybe someone wants to provide a "pre-configured" copy of DrJava to their students.  I've just shown one way to do it.  Another way is to write a .drjava configuration file upon installation, but that's far more complex.  Besides, I'm using a standalone jar file, so there's no installation to speak of.