MegaMillions Experiment – Wrap Up – It Was a Silly Experiment

Well, we have the data from all 5 drawings in this silly experiment, and it is unimpressive. Here are the raw results for each set of numbers across the 5 drawings.

| Number Set | 8/16 | 8/19 | 8/23 | 8/27 | 8/30 |
|---|---|---|---|---|---|
| Family Numbers | No matches | No matches | No matches | Two matches | No matches |
| “Hot” Numbers (≥3x in last 20 draws) | No matches | No matches | No matches | One match | No matches |
| “Cold” Numbers (=1x in last 50 draws) | One match | No matches | No matches | Two matches | No matches |
| Quick Pick | No matches | One match | No matches | No matches | One match |

Let’s total the matches to see whether the way we chose numbers made any difference in the results.

| Number Set | Total Matches | Difference from Average | % of All Numbers Drawn (30 total) | Dollars Won |
|---|---|---|---|---|
| Family Numbers | 2 | No difference | 6.7% | $0 |
| “Hot” Numbers (≥3x in last 20 draws) | 1 | -1 | 3.3% | $0 |
| “Cold” Numbers (=1x in last 50 draws) | 3 | +1 | 10% | $4 |
| Quick Pick | 2 | No difference | 6.7% | $0 |
| Average Total Matches | 2 | | | |
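
For anyone who wants to check the totals, here’s a minimal Python sketch of the tally, with the per-draw match counts hard-coded from the tables above:

  # Matches per number set across the 5 drawings (from the tables above)
  results = {
      "Family Numbers": [0, 0, 0, 2, 0],
      "Hot Numbers":    [0, 0, 0, 1, 0],
      "Cold Numbers":   [1, 0, 0, 2, 0],
      "Quick Pick":     [0, 1, 0, 0, 0],
  }

  numbers_drawn = 30  # 6 numbers per drawing x 5 drawings
  average = sum(sum(m) for m in results.values()) / len(results)

  for name, matches in results.items():
      total = sum(matches)
      print(f"{name}: {total} matches, {total - average:+.0f} vs. average, "
            f"{total / numbers_drawn:.1%} of numbers drawn")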

There is a small argument to be made that the cold numbers did get drawn more often, but I would put this in the category of “too close to call.” It’s not a material difference in my mind; to call it material, I would have wanted the dollars won to be higher. We could have matched three numbers in the cold draws on different days and still not have won anything, so our $4 win, at odds of 1 in 89, was just “luck of the draw” at best.

If you are really interested in these probability-type problems, I recommend checking out Kevin at the Vsauce 2 channel on YouTube. I particularly like his “The Perfect Illegal Lottery” video, which covers the history of the daily numbers.

My friend Ryan pointed out early on that this was doomed to fail because of the “Gambler’s Fallacy,” and he’s 100% correct. Let’s think of it on a simpler basis: flipping a coin. Assuming it’s a “fair” coin, over time it should come up heads half the time and tails half the time. Just because it came up “heads” 5 times in a row doesn’t mean the odds on the 6th flip are any better than 50/50, because the coin has no memory. So there really are no “hot” or “cold” numbers in the lottery.
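
A quick simulation makes the point: condition on five heads in a row, and the sixth flip is still 50/50.

  import random

  # Simulate sequences of 6 fair coin flips, then check: given the first
  # 5 flips all came up heads, how often is the 6th flip heads?
  trials = 1_000_000
  qualifying = sixth_heads = 0
  for _ in range(trials):
      flips = [random.random() < 0.5 for _ in range(6)]
      if all(flips[:5]):          # first five flips were all heads
          qualifying += 1
          sixth_heads += flips[5]

  print(f"P(heads on flip 6 | 5 heads in a row) ~ {sixth_heads / qualifying:.3f}")
  # Prints ~0.500 every time — the coin has no memory.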

Here’s some background on how the lottery drawing is executed. From working at a TV station that helped put on a state lottery drawing back in the day, I know that random sets of balls are used; it’s not the same set in every drawing. There isn’t only one drawing, either: practice drawings are held before the real thing. There are procedures for weighing the balls to make sure they all weigh the same. Even the camera crew is vetted prior to the drawings, so you can’t sneak someone in.

So, there is no “best way” to pick your numbers for MegaMillions. Does all of this mean I’ll quit playing the lottery when the jackpot gets big? Probably not, because there is still a chance, and all you need is $2 and a dream.

MegaMillions Experiment – Draw 4

Cold numbers give a return on investment!

Continuing on with the silly experiment I am doing by picking “hot” and “cold” numbers in the MegaMillions drawing. We had a few more numbers hit on Friday:

| Number Set | 8/16 | 8/19 | 8/23 | 8/27 |
|---|---|---|---|---|
| Family Numbers | No matches | No matches | No matches | Two matches |
| “Hot” Numbers (≥3x in last 20 draws) | No matches | No matches | No matches | One match |
| “Cold” Numbers (=1x in last 50 draws) | One match | No matches | No matches | Two matches |
| Quick Pick | No matches | One match | No matches | No matches |

Cold numbers are still ahead after four drawings, but not by what I’d call a material difference. This drawing did make some money back! One of the “Cold” number matches was the Mega Ball, so we have won back 10% of the $40 invested in the 5 draws.

Final draw is Tuesday night. Read about the previous drawings here: Drawing 1 | Drawing 2 & 3.

Lunar Eavesdropping, Curious Marc, and Tracking Artemis 1 Via RF

Originally published 27-August-22 at 5:00 PM ET
Updated 28-August-22 at 12:25 PM ET

Apollo 16 Command Module “Casper” at Space and Rocket Center in Huntsville, AL – Picture by N4BFR

Back in the days of Apollo, hams were listening in on Apollo 11 and other space flights. An ARRL article entitled “Lunar Eavesdropping” tells how a couple of hams listened in on Apollo 11; outside of getting a 10-second head start on the rest of the world, they made it work but didn’t hear anything unique.

Over the last few months I’ve watched as Curious Marc on YouTube, known as AJ6JV in ham radio circles, has completely reconstructed signals from Apollo gear in his basement. It’s been fascinating to learn the “RF Black Arts,” as he says, behind the really complex way the capsules and LEM sent back their signals: FM and PM data multiplexed together. One thing I noted is that this frequency range was around 2.287 GHz, in the “S-Band” frequency range.

NASA has had 50 years to get more sophisticated with its communications. It now breaks comms down into the “Near Space Network” (NSN), which handles communications around the globe, and the “Deep Space Network” (DSN), which handles communications beyond Earth orbit.

I’ve spent the last few hours going through documents on the NSN from as far back as 2000, and from what I can tell, they still primarily use S-Band frequencies. Here’s an example of frequencies listed in a document related to Wallops Island, VA, part of the NSN.

11.3M Antenna (WGS 11.3M)
  TX 2.025-2.120 GHz
  RX 2.220-2.400 GHz (S-Band)
     8.025-8.400 GHz (X-Band)
  Modes: PM, FM, BPSK, QPSK

4.7M Antenna (LEO-T)
  TX 2.025-2.210 GHz
  RX 2.200-2.300 GHz

So, there may be something to trying to pick up some Artemis 1 or NSN signals while it’s in near-Earth orbit on Monday. I do have some omnidirectional 2 GHz receive coverage with an SDR. My mission would be to capture anything unique and record it for later analysis, though I expect it to be encrypted, since SpaceX encrypted their telemetry feeds after hams started listening in.
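
For the record-and-analyze part, here’s a rough sketch of what I have in mind, assuming a SoapySDR-compatible receiver that can tune S-Band; the device string, sample rate, and capture length are placeholders, not my actual setup:

  import numpy as np
  import SoapySDR
  from SoapySDR import SOAPY_SDR_RX, SOAPY_SDR_CF32

  # Record a raw IQ slice of the NSN S-Band downlink range for later analysis.
  sdr = SoapySDR.Device(dict(driver="hackrf"))   # placeholder device string
  sdr.setSampleRate(SOAPY_SDR_RX, 0, 10e6)       # 10 MHz slice
  sdr.setFrequency(SOAPY_SDR_RX, 0, 2.25e9)      # middle of 2.200-2.300 GHz

  stream = sdr.setupStream(SOAPY_SDR_RX, SOAPY_SDR_CF32)
  sdr.activateStream(stream)

  buff = np.empty(65536, np.complex64)
  with open("sband_iq.cf32", "wb") as f:
      for _ in range(1000):                      # roughly 6.5 s of samples
          sr = sdr.readStream(stream, [buff], len(buff))
          if sr.ret > 0:
              buff[:sr.ret].tofile(f)

  sdr.deactivateStream(stream)
  sdr.closeStream(stream)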

About 2.5 hours after launch, Artemis 1 will start toward the moon and switch to the DSN, according to NASA PR data. (Update: On the 8/27 media conference they mentioned this happens right after the TLI event and could be about 90 minutes into the mission.) The DSN also has S-Band communications but needs to use X-Band or higher at least part of the time, according to this JPL document.

X-Band, K-Band, and Ka-Band are out of my range at the moment, but I will be checking in on the S-Band segment from time to time. If I were running the communications, I probably wouldn’t be turning radios on and off, but instead simulcasting the streams.

So, lots of fun to be had this week as I begin to peek at space comms! If you have something to add or share, hit me up on Twitter @N4BFR or on Facebook.

MegaMillions Experiment – Draws 2 and 3

Continuing on with the silly experiment I am doing by picking “hot” and “cold” numbers in the MegaMillions drawing. I was out of town for the Friday drawing, so let’s catch up on drawings 2 and 3, keeping drawing 1 as a reference:

| Number Set | 8/16 | 8/19 | 8/23 |
|---|---|---|---|
| Family Numbers | No matches | No matches | No matches |
| “Hot” Numbers (≥3x in last 20 draws) | No matches | No matches | No matches |
| “Cold” Numbers (=1x in last 50 draws) | One match | No matches | No matches |
| Quick Pick | No matches | One match | No matches |

After one drawing it looked like the cold numbers would be the ones to beat, but Quick Pick got on the board in drawing 2 with a match. Tuesday was a bad drawing all around, with no matches across the array of 24 numbers.

I did note something interesting that I had not picked up on when creating this: the “Family” and “Hot” number sets share a selection, and so do the “Cold” numbers and the Quick Pick.

One additional thought: if I had truly wanted to stick with “hot” and “cold” numbers for each drawing, I should have re-picked them after each drawing, but this method will work for our simple experiment.

MegaMillions Experiment – Draw 1 of 5

I outlined the silly experiment I am doing by picking “hot” and “cold” numbers in the MegaMillions drawing. Here are the results from the August 16 draw:

| Number Set | Drawing 1 |
|---|---|
| Family Numbers | No matching numbers |
| “Hot” Numbers (≥3x in last 20 draws) | No matching numbers |
| “Cold” Numbers (=1x in last 50 draws) | One matching number |
| Quick Pick | No matching numbers |

So, the cold numbers at least got on the board but there’s no real advantage to any of the combinations after the first draw.

“Experimenting” with the Mega Millions

Adam Savage Meme

A famous modern philosopher once said:

“The only difference between screwing around and science is writing it down.”

– Adam Savage, Mythbusters 2012

So here’s the writing down to document something fun I am trying with Mega Millions.

During the big run-up to the billion-dollar jackpot, my Mom asked me to play some family numbers, which, in the grand scheme of things, have as good a chance as any other if you follow probability. It got me thinking: could I give probability a little nudge? So I am putting $40 into an experiment. I have picked 4 sets of numbers for Mega Millions:

  • Mom’s Family Numbers.
  • The 6 “hottest” numbers from the last 20 drawings, according to an online site. These are numbers that have been drawn between 3 and 5 times in the last 20 drawings. Would these numbers “stay hot” and give me a better chance of winning? (See the sketch after this list for how sets like these can be pulled from draw history.)
  • The 6 “coldest” numbers from the last 50 drawings, according to the same site. All of these have been drawn only once in the last 25 weeks. Would these numbers “catch up” to their probability of being drawn?
  • A standard Quick Pick, which should be a pseudo-random set of numbers.
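
For the curious, here’s a minimal sketch of how “hot” and “cold” sets like these can be pulled from a draw history; the draws shown are placeholders, not real results:

  from collections import Counter

  # past_draws: white-ball sets from previous drawings, newest first.
  # These three entries are placeholders — a real run needs 50 drawings.
  past_draws = [
      {2, 14, 25, 38, 61},
      {7, 14, 33, 57, 61},
      {3, 21, 25, 38, 44},
  ]

  counts_20 = Counter(n for draw in past_draws[:20] for n in draw)
  counts_50 = Counter(n for draw in past_draws[:50] for n in draw)

  # "Hot" = drawn 3+ times in the last 20 drawings;
  # "cold" = drawn exactly once in the last 50.
  hot = sorted(n for n, c in counts_20.items() if c >= 3)
  cold = sorted(n for n, c in counts_50.items() if c == 1)
  print("hot:", hot[:6], "cold:", cold[:6])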

To give these a bit more of a chance, I have purchased 5 drawings for each set of numbers, so we’ll see what happens over the next 2.5 weeks.

Why I embrace the leap second as a symbol of our imperfect earth, and think tech can find another way.

Clock showing EDT and UTC

In the last week or so, the tech industry has announced, in effect, “time is hard; we don’t want to manage to it.” What they have actually done is attack the concept of the leap second, which is designed to keep atomic-based civil time (UTC) in sync with solar time. A leap second is an adjustment to the “civil” time standard, just like a leap day adjusts the calendar. The earth does not rotate in exactly 24 hours every day, so on occasion a leap second is added. The last adjustment was in 2017. Here’s what it looked and sounded like: just an extra tick at 23:59:59.

I was first alerted to this discussion on This Week in Google #674, and their takeaway seemed to be, “either way it’s not a big deal.” One second added in the last 5 years is no big deal to the average person, and that seems right in many ways. The discussion was triggered by the tech industry’s view outlined in the Facebook Engineering article titled “It’s time to leave the leap second in the past.” The article, as I summarize it, says, “look at all the ways the tech industry has screwed up leap seconds; wouldn’t it be better for us if they went away?”

I encourage you to read the whole article on the Facebook page, but in case you don’t, I have highlighted a couple of their issues and supplemented them with a few counterarguments to big tech’s talking points:

“This periodic adjustment mainly benefits scientists and astronomers”
Doesn’t it really help the community at large? What the leap second does is keep Coordinated Universal Time (UTC) in line with solar time: 12 noon UTC is astronomical noon, not 11:59:57 or 12:00:03. The second is a standard measurement, 9,192,631,770 vibrations of the cesium-133 atom. As Wikipedia says, “A unit of time is any particular time interval, used as a standard way of measuring or expressing duration.” Solar midnight to solar midnight has been the standard for the day for millennia, and it makes sense that our time measurement would adjust for the earth’s varied rotation.

“…these days UTC is equally bad for both digital applications and scientists, who often choose TAI or UT1 instead.”
I don’t know that I agree they are equally bad for scientists, but let’s focus on a “civil” day being solar midnight to midnight. UTC maintains accuracy to the solar day.
TAI and UT1 are exactly what the tech companies are arguing for: time standards that do not incorporate leap seconds. The site Time and Date explains it well. TAI is “International Atomic Time,” the synchronization of hundreds of atomic clocks. TAI never adds leap seconds; it has counted continuously since January 1, 1958, and now differs from UTC by 37 seconds: 10 seconds offset when UTC was aligned in 1972, plus 27 leap seconds since. UT1 is a time standard based on the earth’s actual rotation. Just to be complete, there is also GPS time, which started at 0 on January 6, 1980 and counts continuously; it is ahead of UTC by 18 seconds.

Here’s a table that shows the differences at Midnight UTC in London

| Time Standard | Time Indicated at 0:00 UTC |
|---|---|
| Coordinated Universal Time (UTC) | 00:00:00 (12 Midnight) |
| GPS Time | 00:00:18 (12:00:18 AM) |
| International Atomic Time (TAI) | 00:00:37 (12:00:37 AM) |
| Solar Time (UT1 as of 30-Apr-22) | 23:59:59.9025 (11:59:59.9025 PM) |
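
Since TAI and GPS time differ from UTC by fixed whole-second offsets (until the next leap second), the table is easy to reproduce in code. A minimal sketch, using the offsets in effect since the 2017 leap second:

  from datetime import datetime, timedelta, timezone

  # Whole-second offsets in effect since the 2017 leap second:
  TAI_MINUS_UTC = 37
  GPS_MINUS_UTC = 18

  utc_midnight = datetime(2022, 8, 1, tzinfo=timezone.utc)
  print("UTC:", utc_midnight.time())
  print("GPS:", (utc_midnight + timedelta(seconds=GPS_MINUS_UTC)).time())
  print("TAI:", (utc_midnight + timedelta(seconds=TAI_MINUS_UTC)).time())
  # UT1 drifts continuously and has to come from IERS measurements,
  # so there is no fixed offset to compute it from.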

Why is this important to the tech industry?
Every time you post on Twitter, make a stock trade, or send an email, it is time-stamped. Time stamps tell Facebook which post is newest and tell the power plant when to start the generators for the next wave of power distribution. Accurate time IS important, no doubt about it.

Does this need a change to leap seconds?
I say no, and here’s why.

1) This is a software problem. It CAN be fixed in code. Having an entire new second show up every 18 months or more can be a hassle, and a random one at that, since it’s inconsistent. Google and Amazon already have a solution called the leap smear: instead of adding 1 second at 23:59:59 UTC on June 30 or December 31, they take a whole day to add the second in very small increments of about 11.6 ppm. Their time is off from UTC by no more than the accumulation of this smear across half a day, never more than +/- 0.5 seconds, and it keeps the standard of solar midnight being UTC midnight.
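
A sketch of the smear arithmetic, assuming a simple linear 24-hour smear (Google’s actual smear is centered on the leap second, which is why their clocks never differ from UTC by more than 0.5 s):

  # A linear 24-hour leap smear: the extra second is absorbed in tiny
  # slices across the whole day instead of as one jump at 23:59:60.
  SMEAR_WINDOW = 86_400            # seconds in the smear day
  RATE = 1.0 / SMEAR_WINDOW        # ~11.6 ppm

  def smear_offset(seconds_into_window: float) -> float:
      """Seconds of the leap second absorbed so far."""
      return RATE * seconds_into_window

  print(smear_offset(43_200))      # 0.5  (halfway, at the leap itself)
  print(smear_offset(86_400))      # 1.0  (full second absorbed by day's end)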

2) The tech giants can use the other time scales. If they want a time standard that always goes forward, never back, and is never smeared, they can use the existing TAI or GPS time. There is no distribution network for those standards like there is with NTP, but there could be; multi-billion-dollar tech companies like Google, Amazon, and Facebook can absolutely afford the time and resources to make one of them their standard.

Let’s look at GPS time as the option. The standard is already in place, floating above our heads, and many, many places use that tech for time coordination today. I even use it in my own home.

Here’s a screenshot of my NTP server, which uses GPS for time synchronization. The time data here is a super-accurate standard that cost me less than $100 to add. This clock is accurate to 2^-20 seconds (see precision=-20 above), which works out to about 1/1,000,000 of a second: one-millionth-of-a-second accuracy, in place today. Basing “server time,” or whatever you want to call it, on GPS time would probably be trivial because, again, it’s a software change. Do you really know or care if your Instagram post says it was posted at 1:34:18 instead of 1:34:00? Unlikely.
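
You can read those numbers programmatically, too. A minimal sketch using the third-party ntplib package; the hostname is a placeholder for my GPS-disciplined server:

  import ntplib  # third-party: pip install ntplib

  # Query a local GPS-disciplined NTP server (hostname is a placeholder).
  resp = ntplib.NTPClient().request("ntp.local", version=3)

  print(f"offset vs. local clock: {resp.offset * 1000:.3f} ms")
  print(f"stratum: {resp.stratum}")
  # 'precision' is log2 of the clock resolution: -20 -> 2**-20 s, ~0.95 microseconds
  print(f"precision: 2**{resp.precision} = {2 ** resp.precision:.2e} s")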

We already have a “leap” standard.
Just like the earth doesn’t rotate on its axis in exactly 24 hours, it doesn’t revolve around the sun in exactly 365 days; it’s more like 365.24 days to a year. We already have a way to handle that imperfection: the leap day in a leap year. Just ask all those February 29th babies who should be 40 but claim to be 10.
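
The leap-day rule itself is a nice example of this kind of correction. It is slightly more involved than “every 4 years”; here’s the full Gregorian rule as a quick sketch:

  def is_leap_year(year: int) -> bool:
      # Gregorian rule: every 4th year, except century years
      # not divisible by 400.
      return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

  print([y for y in (1900, 2000, 2020, 2022, 2100) if is_leap_year(y)])
  # -> [2000, 2020]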

After all the analysis, here’s my recap.
It’s wonderful that technology is at a place where we can measure time by counting the vibrations of an atom. Our Earth is not actually that precise, though, so that imperfection trickles down to our days and years. Our time and date standards are well thought out to align with the solar changes in both. We should maintain UTC against the solar “civil” day, not an arbitrary day counted against a cesium atom.

Instead of making long-term fixes to their software, either by truly creating programming that supports a leap second or by switching to a linear standard, the tech companies want to take the easy route and change the UTC standard. I believe we should keep UTC aligned with UT1 (solar time), and that requires an occasional adjustment, just like a leap year. The great thing about the internet is that it was built to define its own standards. If Meta, Google, Amazon, and others find their needs are different, they can join together, create an Internet Standard via the RFC process, convince their peers to adopt it, and go to town. That’s what the process is there for, so let’s ask the tech companies to focus on their own universe if they need a different standard.

While they consider that, I hope you will join me in embracing the imperfections in our planet and solar system: support the leap second and leave UTC alone.

I’m glad to have feedback. Tweet me @n4bfr with your thoughts.


Follow-up: 8/2 at 3:20 PM – Found this tweet from @qntm with a tool to use TAI on Unix. Again, it’s a software problem.

The Wednesday Morning Crash Bug

I have a problem with my computer that is stumping me. The first time I start it on a Wednesday morning, it runs about 8 minutes and then locks up: the screen freezes, and no input is possible from the mouse or keyboard. Once I restart the computer, it will happily run for another 6 days and 23 hours, then it’s back to the lockup.

Last month I started my bug chase in earnest. I scrubbed the event logs and can’t really find a source; I get unhelpful responses like this when I look in Event Viewer:

My assumption is that some process is phoning home on a schedule, then attempting to do something and locking up at that point. My most likely villain is Microsoft Defender, because I see WMI running around the same time, and that seems to be part of how the Home edition manages it. I looked into changing the time for update checking, but that appears to be restricted to Enterprise editions, per the MS documentation I have read.
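
One way to narrow the window is to pull the System-log events from around the crash time. A minimal sketch using Windows’ built-in wevtutil; the timestamps are hypothetical, so set them to the actual Wednesday-morning crash window:

  import subprocess

  # Pull System-log events from a window around the lock-up.
  # Timestamps are hypothetical — adjust to the actual crash window.
  query = ("*[System[TimeCreated[@SystemTime >= '2022-08-10T11:50:00.000Z' "
           "and @SystemTime <= '2022-08-10T12:10:00.000Z']]]")
  out = subprocess.run(
      ["wevtutil", "qe", "System", f"/q:{query}", "/f:text", "/c:50"],
      capture_output=True, text=True,
  )
  print(out.stdout)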

So, it’s at a place where I can live with it. I know it’s going to happen and when, so I can plan for it. It’s just annoying and seems like there should be a way to solve this. Some solutions I have ruled out:

  • I am not switching to Linux or another OS. I have things I need to access that will only run on Windows.
  • I am not switching anti-virus software. I can only imagine that would make it worse, not better.

If you have any thoughts on this, please send me a tweet to @N4BFR on Twitter and help with the conversation.

How Long for Long Pi – Part 4 – Bring out the Raspberries

In the 4th post in this series (find post 3, with the Win-tel stats, here) I broke out the Raspberry Pi collection to see how this device has changed over the generations. I can say for sure it only gets better.

In 4 generations, Pi performance has improved from almost 19 hours to calculate Pi to 1 million places down to just under 4 hours. That’s an 80% performance improvement in 8 years. The price has gone up; the Pi 4 as I have it was $75 vs. the $25 of the Pi 1, but more than 4 times faster for less than 3 times the price over those same 8 years is amazing to me.
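
For the record, the quick arithmetic behind those claims, using the times and prices quoted above:

  # Rough improvement math (Pi 1 vs. Pi 4, 1M-place runtimes)
  pi1_hours, pi4_hours = 19, 4
  pi1_price, pi4_price = 25, 75

  print(f"speedup:     {pi1_hours / pi4_hours:.2f}x")     # 4.75x
  print(f"time saved:  {1 - pi4_hours / pi1_hours:.0%}")  # 79%, i.e. ~80%
  print(f"price ratio: {pi4_price / pi1_price:.0f}x")     # 3x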

I take the Pi 0 W results with a grain of salt because it’s meant to be a smaller, less powerful board. But it costs $10 new. If you want to compare them by their different SoCs, Wikipedia has a great article with all the specs.

I’m still a big Raspberry Pi fan and I probably could do more with a single Pi than I do, but I have a dozen in this room and they just crank along like magic. I don’t have one favorite project but one that you might want to read about is my Flip Dot Clock.

Today I have some older Apple machines on the bench. I’ll share those results tomorrow.

How Long for Long Pi – Part 3 – Intel Machines

Following up on my How Long for Long Pi posts from Thursday and Friday I wanted to start quantifying the data a little more. I was able to gather data from 5 Intel computers. They are:

| CPU Type | CPU Generation | Form Factor |
|---|---|---|
| Intel i7 | 12th Generation | Homebuilt Desktop |
| Intel i7 | 10th Generation | Legion Laptop |
| Intel i7 | 8th Generation | Yoga Laptop |
| Intel i7 | 7th Generation | Asus ROG Desktop |
| Intel i7 | 7th Generation | Yoga Laptop |

My testing methodology was to run 3 passes of the test calculating Pi to 100,000 places via a Python 3 script (see Friday’s post), then the same to 1,000,000 places. All 5 machines ran native Windows (the Legion laptop and the 7th Gen Yoga on Windows 10, the others on Windows 11). On 4 of those machines I could also dual-boot into Ubuntu 20 Linux from a USB SSD drive.
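
The script itself is in Friday’s post; for reference, here’s a minimal sketch of the same kind of test, one Machin-formula implementation using Python’s big-integer math (not necessarily the exact script I benchmarked):

  import time

  def arctan_inv(x, digits):
      """arctan(1/x) as a scaled integer, with 10 guard digits."""
      scale = 10 ** (digits + 10)
      power = scale // x            # (1/x)^1, scaled
      total = power
      k, sign, x2 = 1, -1, x * x
      while power:
          power //= x2              # next odd power of 1/x, scaled
          total += sign * (power // (2 * k + 1))
          sign = -sign
          k += 1
      return total

  def pi_to(digits):
      """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
      pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
      return pi // 10 ** 10         # strip the guard digits

  start = time.time()
  digits = 100_000
  result = pi_to(digits)
  print(f"{digits} places in {time.time() - start:.1f} seconds")
  print(str(result)[:12], "...")    # 314159265358...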

I was surprised that the Ubuntu runs averaged 13.7% faster calculation times than the Windows runs. I don’t have the detail to say whether that comes from operating-system overhead, from efficiency differences in the Python 3.10 builds, or from something else entirely.

The longer the calculation, the bigger the difference: at 1,000,000 places, the Ubuntu calculations were 28.3% faster. The other thing that jumped out at me in both calculations was that a 7th Gen desktop was faster than an 8th Gen laptop (by 4.3% on the Ubuntu 1-million-place run). While all these processors are multi-core, the calculations appeared to run on a single core, based on watching Task Manager on Windows and htop on Ubuntu.

Testing of all 5 versions of the Raspberry Pi (0W, 1, 2, 3, 4) is underway. I’ve also got some other devices to test, and I was surprised to see a handheld device outperform one of the 7th Gen Intel machines at a million places.

Again, please feel free to provide comments on my Twitter – @N4BFR.