MegaMillions Experiment – Draw 1 of 5

I outlined the silly experiment I am doing: picking “hot” and “cold” numbers in the Mega Millions drawing. Here are the results from the August 16 draw:

Number Set | Drawing 1
Family Numbers | No matching numbers
“Hot” Numbers (≥3x in last 20 draws) | No matching numbers
“Cold” Numbers (=1x in last 50 draws) | One matching number
Quick Pick | No matching numbers

So, the cold numbers at least got on the board, but there’s no real advantage to any of the combinations after the first draw.

“Experimenting” with the Mega Millions

Adam Savage Meme

A famous modern philosopher once said:

“The only difference between screwing around and science is writing it down.”

– Adam Savage, Mythbusters 2012

So here’s the writing down to document something fun I am trying with Mega Millions.

During the big run-up to the billion-dollar jackpot, my Mom asked me to play some family numbers, which, in the grand scheme of things, have as good a chance as any other if you follow probability. It got me thinking about whether I can give probability a little nudge, so I am putting $40 into an experiment. I have picked 4 sets of numbers for Mega Millions:

  • Mom’s Family Numbers
  • The 6 “hottest” numbers from the last 20 drawings according to an online site. These are numbers that have been drawn between 3 and 5 times in the last 20 drawings. So, will these numbers “stay hot” and give me a better chance of winning?
  • The 6 “coldest” numbers in the last 50 drawings according to the same online site. All of these have been drawn only once in the last 25 weeks. So, will these numbers “catch up” to their probability of being drawn?
  • A standard quick pick, which should be a pseudo-random set of numbers.

In order to give these a bit more of a chance, I have purchased 5 drawings for each set of numbers, so we’ll see what happens after 2.5 weeks.

Why I embrace the leap second as a symbol of our imperfect earth, and think tech can find another way.

Clock showing EDT and UTC

In the last week or so, the tech industry has announced, “time is hard, we don’t want to manage to it.” Actually, what they have done is attack the concept of the leap second, which is designed to keep atomic time (known as UTC) in sync with solar time. A leap second is an adjustment of the “civil” time standard, just like a leap day adjusts the calendar. The Earth does not rotate in exactly 24 hours every day, so on occasion a leap second will be added. The last adjustment was in 2017. Here’s what it looked and sounded like, just an extra tick at 23:59:59:

I was first alerted to this discussion on This Week in Google #674, and their takeaway seemed to be, “either way, it’s not a big deal.” Adding one second in the last 5 years is no big deal to the average person, and that seems right in many ways. The discussion was triggered by the tech industry’s view outlined in the Facebook Engineering article titled It’s time to leave the leap second in the past. The article, as I summarize it, says, “look at all the ways the tech industry has screwed up leap seconds; wouldn’t it be better for us if they went away?”

I encourage you to look at the whole article on the Facebook page, but in case you don’t, I have highlighted a couple of their issues and supplemented them with a few counterarguments to big tech’s talking points:

“This periodic adjustment mainly benefits scientists and astronomers”
Doesn’t it really help the community at large? What the leap second does is keep Coordinated Universal Time (UTC) in line with solar time. 12 noon in UTC is astronomical noon, not 11:59:57 or 12:00:03. The second is a standard measurement: 9,192,631,770 vibrations of the cesium-133 atom. As Wikipedia says, “A unit of time is any particular time interval, used as a standard way of measuring or expressing duration.” Solar midnight to solar midnight has been the standard for the day for millennia, and it makes sense that our time measurement would adjust for the Earth’s varied rotation.

“…these days UTC is equally bad for both digital applications and scientists, who often choose TAI or UT1 instead.”
I don’t know if I agree that they are equally bad for scientists, but let’s focus on a “civil” day being solar midnight to midnight. UTC maintains accuracy to the solar day.
TAI and UT1 are exactly what the tech companies are arguing for: time standards that do not incorporate leap seconds. The site Time and Date explains it well. TAI is “International Atomic Time,” the synchronization of hundreds of atomic clocks. TAI never adds leap seconds; it is based on continuous counting since January 1, 1958. That means it differs from UTC by 37 seconds, based on the 10 seconds of offset established in 1972 and 27 leap seconds since. UT1 is a time standard based on the Earth’s actual rotation. Just to be complete, there is also GPS time, which started at 0 on January 6, 1980 and counts continuously. It is ahead of UTC by 18 seconds.

Here’s a table that shows the differences at midnight UTC in London:

Time Standard | Time Indicated at 0:00 UTC
Coordinated Universal Time | 00:00:00 (12 Midnight)
GPS Time | 00:00:18 (12:00:18 AM)
International Atomic Time | 00:00:37 (12:00:37 AM)
Solar Time (UT1 as of 30-Apr-22) | 23:59:59.9025 (11:59:59.9025 PM)
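To make those offsets concrete, here’s a minimal sketch in Python of shifting a UTC timestamp onto the other scales. The constants are my reading of the offsets above, and they are only valid as of this writing, since TAI-UTC grows every time a leap second is added:

from datetime import datetime, timedelta, timezone

# Offsets as of this writing; these change whenever a leap second is added
TAI_OFFSET = timedelta(seconds=37)  # TAI has never added leap seconds
GPS_OFFSET = timedelta(seconds=18)  # GPS counts continuously since 1980

utc = datetime(2022, 7, 1, 0, 0, 0, tzinfo=timezone.utc)
print("TAI:", utc + TAI_OFFSET)  # 2022-07-01 00:00:37+00:00
print("GPS:", utc + GPS_OFFSET)  # 2022-07-01 00:00:18+00:00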

Why is this important to the tech industry?
Every time you post on Twitter, make a stock trade, or send an email, it is time-stamped. Every time stamp tells Facebook which post is the newest, or tells the energy plant when to start the generators for the next wave of power distribution. Accurate time IS important, no doubt.

Does this need a change to leap seconds?
I say no, and here’s why.

1) This is a software problem. It CAN be fixed in code. Having an entire new second show up every 18 months or more can be a hassle, and a random one at that, since it’s inconsistent. Google and Amazon already have a solution called the leap smear. Instead of adding 1 second at 23:59:59 UTC on June 30 or December 31, they take a whole day to add the second in very small increments of 11.6 ppm. So their time is off from UTC by no more than the accumulation of this smear across a half day, never more than +/- 0.5 seconds. While doing that, it keeps the standard of solar midnight being UTC midnight.
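Here’s a minimal sketch of how a linear smear works. The 24-hour window centered on the leap second reflects Google’s published approach; the code itself is my own illustration, not their implementation:

from datetime import datetime, timedelta, timezone

# The end-of-2016 leap second; the smear runs noon-to-noon around it.
# 1 extra second spread over 86,400 seconds is about 11.6 ppm.
LEAP = datetime(2017, 1, 1, tzinfo=timezone.utc)
WINDOW = timedelta(hours=24)
START = LEAP - WINDOW / 2

def smear_offset(t):
    # Fraction of the extra second absorbed by time t (0.0 to 1.0)
    if t <= START:
        return 0.0
    if t >= START + WINDOW:
        return 1.0
    return (t - START) / WINDOW

print(smear_offset(LEAP))  # 0.5 -- half the second absorbed at the leap itself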

2) The tech giants can use the other time scales. If they desire a time standard that always goes forward, never back, and is never smeared, they can use the existing TAI or GPS time. There is not a networking ecosystem for those standards like there is with NTP, but there could be. Multi-billion-revenue tech companies like Google, Amazon, and Facebook can absolutely afford the time and resources to make this their standard.

Let’s look at GPS time as the option. The standard is already in place, floating above our heads, and many, many places use that tech for time coordination today. I even use it in my own home.

Here’s a screenshot of my NTP server that uses GPS for time synchronization. The time data here is a super-accurate standard that cost me less than $100 to add. This clock is accurate to 2^-20 seconds (see precision=-20 above), which equates to about 1/1,000,000 of a second. One-millionth-of-a-second accuracy, in place today. Basing “server time,” or whatever you want to call it, on GPS time would probably be trivial because, again, it’s a software change. Do you really know or care if your Instagram post says it was posted at 1:34:18 instead of 1:34:00? Unlikely.
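As a quick sanity check on that precision figure (NTP reports precision as a power of two):

# precision=-20 in the ntpq output means the clock can be read
# to within 2^-20 seconds
resolution = 2 ** -20
print("%.9f s (~%.2f microseconds)" % (resolution, resolution * 1e6))
# 0.000000954 s (~0.95 microseconds)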

We already have a “leap” standard.
Just like the Earth doesn’t rotate on its axis in exactly 24 hours, it doesn’t revolve around the sun in exactly 365 days. It’s more like 365.24 days to make a year. We have a way to handle the Earth’s imperfect orbit, and it’s called a leap day in a leap year. Just ask all those February 29th babies who should be 40 but claim to be 10.
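The leap-day rule is simple enough to write down in a couple of lines; here’s a minimal sketch of the Gregorian rule:

def is_leap_year(year):
    # Every 4th year, except century years not divisible by 400,
    # absorbing the extra ~0.24 days per year
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print([y for y in (1900, 2000, 2022, 2024) if is_leap_year(y)])  # [2000, 2024]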

After all the analysis, here’s my recap.
It’s wonderful that technology is in a place where we can measure time by counting the vibrations of an atom. Our Earth is not actually that precise though, so that imperfection trickles down to our days and years. Our time and date standards are well thought-out to align to the solar changes in both. We should maintain UTC against the solar “civil” day, not the arbitrary day that counts against a cesium atom.

Instead of making long-term fixes to their software, either by truly creating programming that would support a leap second or by switching to a linear standard, the tech companies want to take the easy route and change the UTC standard. I believe we should keep the UTC standard aligned with UT1 (solar time), and that requires an occasional adjustment, just like a leap year. The great thing about the internet is that it was built to define its own standards. If Meta, Google, Amazon, and others find their needs are different, they can join together, create an Internet Standard via the RFC process, convince their peers to adopt it, and go to town. That’s why it’s there, so let’s ask the tech companies to focus on their own universe if they need a different standard.

While they consider that, I hope you will join me in embracing the imperfections in our planet and solar system: support the leap second and leave UTC alone.

I’m glad to have feedback. Tweet me @n4bfr with your thoughts.

Follow up: 8/2 at 3:20 PM – Found this tweet from @qntm with a tool to use TAI on Unix. Again, it’s a software problem.

The Wednesday Morning Crash Bug

I have a problem with my computer that is stumping me. The first time I start it on Wednesday mornings, it runs about 8 minutes and then locks up. The screen freezes, and no input is possible from the mouse or keyboard. Once I restart the computer, it will happily run for another 6 days and 23 hours, then it’s back to lockup.

Last month I started my bug chase in earnest. I scrubbed through the event logs and can’t really find a source. I get unhelpful responses like this when I look in Event Viewer:

My assumption is that some process is phoning home on a schedule, then attempting to do something and locking up at that time. My most likely villain is Microsoft Defender, because I see WMI running around the same time, and that seems to be part of how the home version manages it. I looked into changing the time for update checking, but that seems to be restricted to enterprise versions in the MS documentation I have read.

So, it’s at a place where I can live with it. I know it’s going to happen and when, so I can plan for it. It’s just annoying and seems like there should be a way to solve this. Some solutions I have ruled out:

  • I am not switching to Linux or another OS. I have things I need to access that will only run on Windows.
  • I am not switching anti-virus software. I can only imagine that will make it worse, not better.

If you have any thoughts on this, please send me a tweet to @N4BFR on Twitter and help with the conversation.

How Long for Long Pi – Part 4 – Bring out the Raspberries

In the 4th post in this series (find post 3 with Win-tel stats here), I broke out the Raspberry Pi collection to see how this device has changed over the generations. I can say for sure it only gets better.

In 4 generations, the Pi’s performance has improved from almost 19 hours to calculate Pi to 1 million places to just under 4 hours. That’s an 80% performance improvement in 8 years. Now, the price has gone up (the Pi 4 as I have it was $75 vs. the $25 of the Pi 1), but 4 times faster for less than 3 times the price over those same 8 years is amazing to me.

I take the Pi 0 W results with a grain of salt because that’s supposed to be a smaller, less powerful board. But it costs $10 new. If you want to compare them by their different SoCs, Wikipedia has a great article with all the specs.

I’m still a big Raspberry Pi fan and I probably could do more with a single Pi than I do, but I have a dozen in this room and they just crank along like magic. I don’t have one favorite project but one that you might want to read about is my Flip Dot Clock.

Today I have some older Apple machines on the bench. I’ll share those results tomorrow.

How Long for Long Pi – Part 3 – Intel Machines

Following up on my How Long for Long Pi posts from Thursday and Friday, I wanted to start quantifying the data a little more. I was able to gather data from 5 Intel computers. They are:

CPU Type | CPU Generation | Form Factor
Intel i7 | 12th Generation | Homebuilt Desktop
Intel i7 | 10th Generation | Legion Laptop
Intel i7 | 8th Generation | Yoga Laptop
Intel i7 | 7th Generation | Asus ROG Desktop
Intel i7 | 7th Generation | Yoga Laptop

My testing methodology was to run 3 passes of the test calculating Pi to 100,000 places via a Python 3 script (see Friday’s post), and then to 1,000,000 places. All 5 machines ran native Windows (the Legion Laptop and 7th Gen Yoga are Win 10, the others Windows 11). On 4 of those machines, I could dual-boot into Ubuntu 20 Linux from a USB SSD drive.
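As a sketch of what those passes looked like (this is not my exact test script; the module name pi_calc is hypothetical, standing in for the script from Friday’s post):

import time
from statistics import mean

def average_runtime(func, arg, passes=3):
    # Run func(arg) several times and return the mean wall-clock seconds
    times = []
    for _ in range(passes):
        start = time.perf_counter()
        func(arg)
        times.append(time.perf_counter() - start)
    return mean(times)

# from pi_calc import pi              # hypothetical module name
# print(average_runtime(pi, 100000))  # average of 3 passes at 100K places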

I was surprised that the Ubuntu runs averaged 13.7% faster calculation time than the Windows runs. I don’t have the details to drill down further and attribute that to operating system overhead, to the efficiency of the respective Python 3.10 builds, or to something else entirely.

When you spend more time calculating, you find an even bigger difference in times: Ubuntu calculations were 28.3% faster at this length of calculation. The other thing that jumped out at me in both calculations was that a 7th Gen desktop was faster than an 8th Gen laptop (by 4.3% on the Ubuntu 1 million run). While all these processors are multi-core, it appeared to me, watching Task Manager on Windows and htop on Ubuntu, that all of these were running on a single core while calculating.

Testing of all 5 versions of the Raspberry Pi (0W, 1, 2, 3, 4) is underway. I’ve also got some other devices to test, and I was surprised to see a handheld device outperform one of the 7th Gen Intel machines at a million places.

Again, please feel free to provide comments on my Twitter – @N4BFR.

How Long for Long Pi – Part 2

In a previous blog post I considered how I might benchmark performance of different computers to understand how they compare across processor generations and maybe in the future across major architectures.

After experimenting with some different Python code, I found a version that is very consistent in its performance, seems to run on 1 core of a multi-core processor, and runs on both Windows and Linux. Here’s the version I am using for calculating Pi to 100K. I sourced it from this Stack Overflow thread.

#-*- coding: utf-8 -*-

# Author:    Fatih Mert Doğancan
# Date:      02.12.2014

# Timer Integration 18-Jun-22 by Jim Reed

# Timer function Start
import time

start = time.time()
print("Dogancan - Machin 100,000 Digits Pi Calculation Start")

#Original Calculation Code goes here

def arccot(x, u):
    # Integer (fixed-point) arccot(1/x) via its Taylor series
    sum = ussu = u // x
    n = 3
    sign = -1
    while 1:
        ussu = ussu // (x*x)
        term = ussu // n
        if not term:
            break
        sum += sign * term
        sign = -sign
        n += 2
    return sum

def pi(basamak):  # "basamak" is Turkish for "digits"
    # Machin's formula: pi = 4 * (4*arccot(5) - arccot(239)),
    # computed with 10 guard digits that are stripped at the end
    u = 10**(basamak+10)
    pi = 4 * (4*arccot(5,u) - arccot(239,u))
    return pi // 10**10

if __name__ == "__main__":
    print (pi(100000)) # 100000

# calculation code ends
# timer reports

end = time.time()
print("Dogancan - Machin 100,000 digits elapsed calculation time: %.3f seconds" % (end - start))

I expect to share all my raw data as I get it more in shape, but I am definitely getting some good first impressions. Let’s look at a summary of the tests on 5 machines so far, running Pi to 100K places using the code above on Python in a command prompt / terminal shell.

My PC Name | PC Type | OS | Pi to 100K in X Seconds (Avg 3 Runs)
Telstar | Raspberry Pi 3 | Raspbian | 148.395
Edison | Raspberry Pi 4 8GB | Raspbian | 111.263
Tesla | Intel i7-7th Gen Desktop | Win 11 | 12.997
Tesla | Intel i7-7th Gen Desktop | Ubuntu 20 | 10.960
Charlie Duke | Intel i7-8th Gen Laptop | Win 11 | 13.342
Charlie Duke | Intel i7-8th Gen Laptop | Ubuntu 20 | 11.627
Marconi | Intel i7-12th Gen Desktop | Win 11 | 6.152
Marconi | Intel i7-12th Gen Desktop | Ubuntu 20 | 5.352

No surprise here on machine power. The more powerful the machine, the faster it processed. Now, I don’t think I have enough samples or data to draw a strong conclusion, but on the machines where I could run Ubuntu and Windows, Ubuntu outperformed Windows by at least 12% when averaged across the three runs.

Now let’s step it up an order of magnitude: how long will it take these machines to calculate Pi to 1 million places? I used the same Python script, just changed the variable. Note that because of the long run times, I only ran the Raspberry Pi tests ONCE; the 3 other PCs show an average of 3 runs.
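For clarity, the only change to the script above is the argument in the final call:

if __name__ == "__main__":
    print (pi(1000000)) # 1,000,000 places instead of 100,000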

My PC Name | PC Type | OS | Pi to 1 Million in HH:MM:SS
Telstar | Raspberry Pi 3 (1 Run) | Raspbian | 5:12:57
Edison | Raspberry Pi 4 8GB (1 Run) | Raspbian | 3:42:01
Tesla | Intel i7-7th Gen Desktop | Win 11 | 0:26:08
Tesla | Intel i7-7th Gen Desktop | Ubuntu 20 | 0:18:21
Charlie Duke | Intel i7-8th Gen Laptop | Win 11 | 0:27:44
Charlie Duke | Intel i7-8th Gen Laptop | Ubuntu 20 | 0:19:11
Marconi | Intel i7-12th Gen Desktop | Win 11 | 0:12:25
Marconi | Intel i7-12th Gen Desktop | Ubuntu 20 | 0:08:57

One of the really cool pieces of data: the spread across the Marconi runs on Ubuntu 20 was only 0.14 seconds from high to low.

The difference between Windows and Ubuntu really stood out this time. Here’s the data for the 3 machines individually:

  • Charlie Duke was 30.46% faster with Ubuntu
  • Tesla was 29.78% faster with Ubuntu
  • Marconi was 28.84% faster with Ubuntu

So, ultimately, I don’t know if this will mean anything to anyone but me; however, I am enjoying it so far. Next steps:

  • Complete household data gathering – will run on Pi 1 and Pi 2, a 10th Gen Intel laptop, and a 2015 Mac Mini
  • Publish my complete data set.
  • Understand if I can port this calculation. Ultimately I’d love to try one of the old museum Cray machines to see if I can add those to the scoreboard.

If you have comments or thoughts for me on this, you can tweet me @N4BFR.

How Long for Long Pi?

Note: This is less of a blog post and more of a running commentary on a project I have conceived. I have a long way to go on it but I hope you enjoy the journey.

I’ve been thinking about computers I have seen at places like The National Museum of Computing in the UK or the Computer Museum of America here in metro Atlanta. One of the things that has always challenged me is how to benchmark computers against each other. For instance, we know the Cray 1A at the CMoA had 160 megaflops of computing power, while a Raspberry Pi 4 has 13,500 megaflops according to the University of Maine. But what can you do with a megaflop of power? How does that translate to the real world?

I’m considering a calculation matrix that would use one of two metrics. For older computers: how many places of Pi can they calculate in X amount of time, say 100 seconds? For newer computers: how long does it take the machine to calculate Pi to 16 million places? Here are my early examples:

Pi to 10,000 Places on Raspberry Pi

Computer | Processor | RAM | Elapsed Time | How Calculated
Raspberry Pi Model 3 | ARM Something | Something | 6 Min 34 Sec (394 Seconds) | BC #1 (Raspbian)
Raspberry Pi Model 3 | ARM Something | Something | 2 Min 15 Sec (135 Seconds) | BC #2
Raspberry Pi Model 3 | ARM Something | | 0 Min 0.1 Sec | Pi command

Pi to 16,000,000 Places

Computer | Processor | RAM | Pi to 16M Places Time | How Calculated
Lenovo Yoga 920 | Intel Core i7-8550U CPU @ 1.8 GHz | 16 GB | 9 Min 55 Sec (595 Seconds) | SuperPi for Windows Version 1.1
Lenovo Yoga 920 | Intel Core i7-8550U CPU @ 1.8 GHz | 16 GB | 0 Min 23 Sec | Pi command
N4BFR Vision Desktop | Intel Core i7-12700K CPU @ 3.6 GHz | 32 GB | 3 Min 15 Sec (195 Seconds) | SuperPi for Windows Version 1.1
Raspberry Pi Model 3B+ | ARM 7 Rev 4 (V71) | 1 GB | 6 Min 03 Sec (363 Seconds) | Pi command

Tool choice will be an issue because I want consistent performance across operating systems. Efficiency will also be an issue because I want something that computes at roughly the same speed on Windows as on Unix.

  • SuperPi for Windows 1.1 was the first tool I came across, and it seemed pretty straightforward and ran on the many versions of Windows I tried.
  • Moving on to a calculator I could use in Unix, I found this John Cook Consulting website that had a couple of calculations using the BC program. I found the results inconsistent on the Lenovo Yoga 920:
BC Calculation 1: time bc -l <<< "scale=10000;4*a(1)"

BC Calculation 2: time bc -l <<< "scale=10000;16*a(1/5) - 4*a(1/239)"

I then found the pi command, which might be more consistent with what I need.

$ time pi 10000

Pi Calculations on Lenovo Yoga 920
Windows time is reported by SuperPi. BC time is “Real” time reported by the process.

Pi Calculated to X Places | Windows Time | BC 1 | BC 2 | Pi Command
10K (Pi Comparison) | 1 Min 45 Sec | 0 Min 32 Sec | 0 Min 35 Sec | 0.09 Sec
20K | 3 Min 22 Sec | 0. | |
50K | | Incomplete after 15 minutes | |
128K | 0 Min 01 Sec | Incomplete after 60 Minutes | |
512K | 0 Min 08 Sec | | |
1M | 0 Min 16 Sec | | |
8M | 3 Min 05 Sec | | |
16M | 9 Min 55 Sec | | | 0 Min 23 Sec

So using BC as a method of calculating does not seem to scale.

Coming back to this a few days later, I may have a partial solution. This will limit the use of this test on older machines, but it should be fairly consistent on newer ones. I plan to do the calculation with a script in Python 3. This should allow for roughly similar performance on the same machine to make results more comparable.

Python3 Downloads:

Python3 methods for calculating Pi:

I was able to get a rudimentary calculation in Windows using both of the formulas and include a function to time the process consistently. Now I need to compare in Linux and blow out the calculation to allow a material number of places for this to be an effective measure.
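For the timing piece, here’s a minimal sketch of what I mean by timing the process consistently (the function name is my own; time.perf_counter() behaves the same on Windows and Linux):

import time

def timed(func, *args):
    # Return (result, elapsed wall-clock seconds) for func(*args)
    start = time.perf_counter()
    result = func(*args)
    return result, time.perf_counter() - start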

I have found a few more options thanks to Stack Overflow, and I’m testing them now on my 12th Gen Intel machine.

  • 100,000 digits of Pi using the “much faster” method proposed by Alex Harvey: 177.92 seconds for the first pass, 177.83 seconds for the second pass. I like the consistency.
  • Guest007 proposed an implementation using the Decimal library. I attempted a 10,000-digit calculation, and that took 24.6 seconds; 100,000 places didn’t complete after more than 10 minutes. Interestingly, a peek at the system processing said it was only using 8.1% of CPU time. (I sketch the general Decimal approach below.)
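I have not reproduced Guest007’s code here; the following is just a minimal sketch of the general Decimal approach, a Machin-formula calculation with function names of my own:

from decimal import Decimal, getcontext

def arctan_inv(x, digits):
    # arctan(1/x) via its Taylor series, evaluated with Decimal arithmetic
    getcontext().prec = digits + 10   # working precision plus guard digits
    power = Decimal(1) / x            # x^-(2k+1), starting at k = 0
    total = power
    x2 = x * x
    k, sign = 1, -1
    while True:
        power /= x2
        term = power / (2 * k + 1)
        new_total = total + sign * term
        if new_total == total:        # term too small to change the sum
            return total
        total = new_total
        sign = -sign
        k += 1

def pi_decimal(digits):
    # Machin's formula: pi = 4 * (4*arctan(1/5) - arctan(1/239))
    return 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))

print(pi_decimal(50))  # 3.14159... to ~50 places (plus guard digits)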

Tomorrow I’ll start a new chart comparing these two methods across multiple machines.

Researching Sgt. Clemett Harrison Saint

Sgt. C.H. Saint’s gift from the town of Horden for being awarded the Military Medal during WW1.

I’m enjoying looking more and more into my family history, and today I am spending a few minutes on C. H. Saint, who at one time hailed from the village of Horden in County Durham, England. Great Granddad Saint was awarded the Military Medal in 1918.

Here’s what one genealogy record has to say about Sgt. Saint:

Born in Marsden Colliery, Durham, England on abt 1890 to John Thomas Richardson and Dora Harrison. Clemett Harrison Saint married Rose A Salmen and had 1 child. He passed away on 21 Mar 1937 in West Hartlepool, Durham, England.


I found this at the UK National Archives site. It appears he fought in Egypt during the war, in the British Army’s Durham Light Infantry.

I’m hoping to visit Horden in the fall to see what else I might find out.

Telstar and Callsign Curiosity

Note: Initial post of this article was around 5 PM on April 20, 2022. I corrected the post around 6:20 to reflect the proper call sign.

In case you didn’t know, Telstar was the first satellite to relay communications between two continents. It launched in July 1962 and lasted less than 9 months.

YouTube was nice enough to suggest this Periscope Film called “Behind the Scenes with Telstar.”

This left me with a few questions:

At 27:08 in the video the tech says “sending station identification” and you hear in Morse what appears to be DE KF2XBR.

Correction from initial post: I found a second video where you can hear the Morse Code and it’s clearer now. The call sign is KF2XCK as found in the linked video from AT&T Tech Channel.

I don’t know that satellites, even to this day, have had their own call signs, so I’m assuming this is the ground station call sign. That ground station was in Andover, Maine. (An additional Bell Labs Telstar video confirms at least the “DE KF” portion of the call.)

This raised a couple of questions for me. If it was a communications service, why didn’t it have an XXX#### type call like the ones that seem to have been given out at the time?

Why was it KF2*** when Maine is in the 1 call sign area? My guess is that KF2XBR would have been assigned to Bell Labs, and that would have been coordinated out of their New Jersey HQ. I looked at the 1961 and 1963 call books, but there are no K*2X* stations listed.

I’ll be doing more research but if I am to believe Wikipedia, all experimental call signs, not just amateur, were in this **#X** format.

I did find a later use of KF2XBR as part of a BellSouth permit granted by the FCC in 1990. These look like cellular telephone frequencies.

From reading through these FCC proceedings, it would seem that these experimental calls were given out sequentially instead of by call region, because many of the calls listed were KF2X** calls.

An interesting fact I found while reading: the US accidentally nuked the satellite with a high-altitude nuclear test. Scientific American documented how the Starfish Prime test impacted Telstar, which launched a day after the test.