Where did my LAN-based websites go?

I documented in an earlier blog post that I recently switched my daily driver computer from a 12th Gen i7 Windows PC to a new Mac Mini M4 Pro. It's been a week and all is working as well as I expected. I can attend conference calls using my ATEM Mini and my Scarlett 2i2 as video and audio source devices. I'm working with Google Docs, Photos, and videos on the web with no issues at all. I've even got the slicing software running for the 3D printer to churn out some stuff.

All that is good, but there was one thing I couldn't figure out. I have an extensive group of websites I need to access on my LAN. These include the web control pages for my NAS, FlightAware, and even a small switch I use for studio lights. Here was the issue: I couldn't access any of them via the Google Chrome browser. I could ping the devices. I could access them via Safari. So why couldn't I see my LAN websites in Google Chrome?

I poked around and poked around until I finally found the answer on Reddit. It seems that security permissions were keeping Chrome from accessing items on my "local network." Once I flipped that permission, I was rocking along.

I thought it would be helpful to show others where to find this if they need it. This is a change in "System Settings" in macOS; it's not a Chrome-specific setting. Here are the screens to follow in macOS Sequoia 15.1:

1. In System Settings, select "Privacy & Security."

2. In Privacy & Security, select "Local Network."

3. In Local Network, flip the switch for Chrome to on. Not sure why I have two Google Chromes listed, but so it goes.

I am really thankful to the folks on Reddit for sharing this answer, and I hope this post pays it forward a bit. You can find me on social media if you have any comments. Just search my callsign – N4BFR – and you'll find me!

How Long for Long Pi – Part 4

In previous blog installments, I’ve tracked how long it takes for my computer to calculate an extended number of digits of Pi as a performance metric. I called it How Long for Long Pi. There is a new computer to rule the ham shack today. It’s a Mac Mini M4 Pro with 48 GB of RAM. I decided to compare it to my most recent machine tests and see how it does.

I broke out my Python script from last time and started doing test runs. Let's do a basic comparison to my 12th Gen i7 PC with 32 GB of RAM. Those configurations were the top performers in my previous experiment.

Machine (times are 3-run avg)       | Pi to 100K | Pi to 1M        | Notes
i7 Desktop with Win 11 (12700K CPU) | 6.2 Sec    | 12 Min 35.0 Sec |
i7 Desktop with Ubuntu (12700K CPU) | 5.4 Sec    | 08 Min 57.2 Sec |
Mac Mini M4 Pro                     | 4.47 Sec   | 07 Min 27.0 Sec | Executed with Py Editor from the Apple App Store

By this benchmark alone, the new Mac Mini M4 Pro is processing 16.7% faster than a 2022-generation Intel desktop running Linux (447 seconds vs. 537 seconds on the 1M run). How about 40.8% better than Win 11 (447 seconds vs. 755 seconds)?

I’ll add some general observations. The Mini doesn’t seem like it’s breaking a sweat. It’s warm to the touch but not overly so. The fan has not come on. I’m typing this blog post while the calculations are happening, so it’s even got a little bit of a handicap.

I did have to make one change to the code: adding a line to raise Python's limit on converting large integers to strings so the full result can be printed. The version I used is below. Let me know on my socials if you try this.

#-*- coding: utf-8 -*-

# Author:    Fatih Mert Doğancan
# Date:      02.12.2014

# Timer Integration and string modification 0-Nov-2024 by Jim Reed

# Timer function start
import sys
import time

start = time.time()
print("Dogancan - Machin 1,000,000 Digits Pi Calculation Start")

# Original calculation code goes here

# Raise the limit on integer-to-string conversion so the
# 1,000,000-digit result can be printed (needs a recent Python 3)
sys.set_int_max_str_digits(1100000)

def arccot(x, u):
    # Taylor series for arccot(x), scaled by u to stay in integer math
    sum = ussu = u // x
    n = 3
    sign = -1
    while True:
        ussu = ussu // (x*x)
        term = ussu // n
        if not term:
            break
        sum += sign * term
        sign = -sign
        n += 2
    return sum

def pi(basamak):
    # Machin's formula: pi = 4*(4*arccot(5) - arccot(239)),
    # computed with 10 guard digits ("basamak" is Turkish for "digit")
    u = 10**(basamak+10)
    pi = 4 * (4*arccot(5,u) - arccot(239,u))
    return pi // 10**10

if __name__ == "__main__":
    print(pi(1000000)) # 10000


# Calculation code ends
# Timer reports

end = time.time()
print("Dogancan - Machin 1,000,000 digits elapsed calculation time")
print(end-start)
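
If you want to run it yourself, a session looks something like this (the filename is hypothetical, the million-digit output is truncated here, and your elapsed time will differ):

$ python3 machin_pi.py
Dogancan - Machin 1,000,000 Digits Pi Calculation Start
31415926535897932384...
Dogancan - Machin 1,000,000 digits elapsed calculation time
447.01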


iPhone 14 Pro and Apple Watch 8 Review

iPhone 14 and Apple Watch 8

(10-October-22)
Let's start off with a true confession. I have GAS. Gear Acquisition Syndrome. I like to get new stuff and try it out. However, neither of these items is particularly new. So why get the iPhone 14 Pro and the Apple Watch 8 now? Let's take them in parts.

Back when the pandemic first started, I had an iPhone 10 (or is it X? I'm still not sure what I am supposed to say). Unfortunately, something happened with its near field communications, and I needed that. Not "I really want to have it" but "I have health related devices that use it, so I need it fixed." Because COVID, I couldn't get good help from Apple; phone support sent me to the store, and the store sent me to phone support. You know the basic support circle-j***. I threw up my hands and got a Google Pixel 5 as a replacement.

Now, I like the Pixel 5. Fine phone with a really good camera. I think the Android platform lacks some of the fit and finish of Apple's iOS, but nothing that was a real deal breaker for me. If it weren't for a couple of things I would have been content staying with Android. In fact, the transition from Apple to Android was much easier than the transition back. More on that in a moment.

What sold me on going back to Apple in general and the iPhone 14 Pro in particular were the emergency communications tools and the camera. As a ham radio operator I probably understand the limitations of wireless better than most people, but even then I have been let down by all the carriers while road tripping in places like South Georgia or the Blue Ridge Mountains. While I always seem to find a way in an emergency, I don’t like knowing I might go hours without coverage. The emergency messaging via satellite will help me fill in the gaps and give me peace of mind when I am on the road or in the mountains and that’s a huge value to me.

The other item I mentioned is photography. I like to take pictures and videos, so changes in those areas always get my attention. As I mentioned, I am big on travel, and one of the things I have been trying to do is reduce my load. When I go to the mountains for pictures I typically take a DSLR with tripods and computers to back up SD cards, and it's a lot of stuff. I felt that with the new camera – 48 megapixels, lots of shooting modes and options, plus a much smaller footprint – I could break free of my DSLR. With a trip to England coming up, hitting 8 areas in 10 days, I wanted to keep my load low, and this will help. The picture quality is very good versus the Pixel 5, not that the P5 is bad at all. See my first impressions blog post for a bakeoff. This article on PetaPixel gets into the upgrade benefits.

So, now you know why I made the switch. The how was painful, but it's one-time pain. Some brief takeaways:

– With the switch TO Android there was a nice tool to make the migration, with a custom cable that connected the two devices. No such cable here, and I couldn't even get the phones to talk to each other, despite an app that promised to do that very thing.

– My wireless provider is AT&T Prepaid and they were not prepared to handle this type of conversion. The iPhone 14 Pro only uses an eSIM while the Pixel 5 uses a physical SIM. I was without service for about 6 hours while I was sent from store to phone and almost back to store before a manager in chat support saved me. I hope my experience became a support article so others don't go through that pain.

Let’s talk a little about the watch. I had an Apple Watch 3 and it was fine. I didn’t feel like it was a critical device for me, and actually handed it down to a family member because I am more of a fan of mechanical watches. I did try a couple of Android watches, one from Samsung and one inexpensive knockoff. I wasn’t impressed and didn’t really integrate them into my lifestyle.

In the gap of 5 versions, however, Apple has focused more on health apps and I have become more focused on my health. It was time to give the watch another try. A few weeks after getting the phone I went to West Farms Mall outside of Hartford and shopped the Apple Store. My biggest question was whether I wanted the Apple Watch Ultra or the Series 8. As much as I have that GAS I admitted earlier, I couldn't bring myself to spend the extra $300 on the Ultra. First, I didn't like the size. While I am OK with a big watch, that particular one just seemed very thick. Second, I didn't need cellular connectivity on my watch. I don't get separated from my phone so often that I need the additional access, and I don't want to pay the monthly vig for the privilege. Now in fairness, I don't know if cellular activation is required, but it's one more thing to break. So, I went with the base Series 8.

So far I am really pleased with all the integrations on the Series 8. Sleep tracking, exercise apps, health apps, controlling podcasts from the phone in my pocket: all good things so far. I also like the battery life. I charge it while in the shower and it runs most of the day without issues. Some nice watch faces too, with different complications. That's an area I want to explore more as I go.

So, outside of the computer (a custom-built Windows PC with a bug that is fading), I am all in on Apple again. I'm not feeling like an Apple fanboy, just a user. One of the biggest lessons for me over the last year or so is that you may as well shop for the features you want and just be prepared to put in the time to fight with support, because no company these days is looking to have world-class support.

The iPhone and watch are headed out on their first long road trip. I’ll update on performance if there is something significant to share. Thanks for reading and if you have any thoughts on this, please send me a tweet to @N4BFR on Twitter and help with the conversation.

Why I embrace the leap second as a symbol of our imperfect earth, and think tech can find another way.

Clock showing EDT and UTC

In the last week or so, the tech industry has announced, "time is hard, we don't want to manage it." Actually, what they have done is attack the concept of the leap second, which is designed to keep the atomic-based civil time standard (UTC) in sync with solar time. A leap second is an adjustment of the "civil" time standard, just like a leap day adjusts the calendar. The earth does not rotate in exactly 24 hours every day, so on occasion a leap second will be added. The last adjustment was in 2017. Here's what it looked and sounded like, just an extra tick at 23:59:59:

I was first alerted to this discussion on This Week in Google #674, and their takeaway seemed to be, "either way, it's not a big deal." Adding 1 second in the last 5 years is no big deal to the average person, and that seems right in many ways. The discussion was triggered by the tech industry's view outlined in this Facebook Engineering article titled It's time to leave the leap second in the past. The article, as I summarize it, says, "look at all the ways the tech industry has screwed up leap seconds; wouldn't it be better for us if they went away?"

I encourage you to read the whole article on the Facebook page, but in case you don't, I have highlighted a couple of their issues and supplemented them with a few counterarguments to big tech's talking points:

“This periodic adjustment mainly benefits scientists and astronomers”
Doesn't it really help the community at large? What the leap second does is keep Coordinated Universal Time (UTC) in line with solar time. 12 noon in UTC is astronomical noon, not 11:59:57 or 12:00:03. The second is a standard measurement: 9,192,631,770 vibrations of the cesium-133 atom. As Wikipedia says, "A unit of time is any particular time interval, used as a standard way of measuring or expressing duration." Solar midnight to solar midnight has been the standard for the day for millennia, and it makes sense that we would adjust our time measurement for the earth's varied rotation.

"…these days UTC is equally bad for both digital applications and scientists, who often choose TAI or UT1 instead."
I don't know that I agree they are equally bad for scientists, but let's focus on a "civil" day being solar midnight to midnight. UTC maintains accuracy to the solar day.
TAI and UT1 are exactly what the tech companies are arguing for: time standards that do not incorporate leap seconds. The site Time and Date explains it well. TAI is "International Atomic Time," the synchronization of hundreds of atomic clocks. TAI never adds leap seconds; it has counted continuously since January 1, 1958. That makes it 37 seconds different from UTC, based on the 10-second offset established in 1972 and the 27 leap seconds added since. UT1 is a time standard based on the earth's rotation, which also has a fixed calculation. Just to be complete, there is also GPS time, which started at 0 on January 6, 1980 and counts continuously. It is ahead of UTC by 18 seconds.

Here's a table that shows the differences at midnight UTC in London:

Time Standard                    | Time Indicated at 0:00 UTC
Coordinated Universal Time (UTC) | 00:00:00 (12 Midnight)
GPS Time                         | 00:00:18 (12:00:18 AM)
International Atomic Time (TAI)  | 00:00:37 (12:00:37 AM)
Solar Time (UT1 as of 30-Apr-22) | 23:59:59.9025 (11:59:59.9025 PM)

Why is this important to the tech industry?
Every time you post on Twitter, make a stock trade, or send an email, it is time-stamped. Every timestamp tells Facebook which post is the newest, or tells the power plant when to start the generators for the next wave of power distribution. Accurate time IS important, no doubt about it.

Does this need a change to leap seconds?
I say no, and here’s why.

1) This is a software problem. It CAN be fixed in code. Having an entire new second show up every 18 months or so can be a hassle, and a somewhat random one at that, since the schedule is inconsistent. Google and Amazon already have a solution called the leap smear. Instead of adding 1 second at 23:59:59 UTC on June 30 or December 31, they take the whole day to add the second in very small increments, slewing the clock rate by about 11.6 parts per million. Their time is never off from UTC by more than the accumulation of the smear across half a day, +/- 0.5 seconds at worst. While doing that, it keeps the standard of solar midnight being UTC midnight.
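
To make the arithmetic concrete, here's a toy sketch of a 24-hour linear smear (my own illustration, not Google's production code):

# Toy model of a leap smear: instead of inserting one second at
# midnight, slew the clock by 1/86400 (about 11.6 ppm) across a day.

SMEAR_WINDOW = 86400.0  # seconds in the smear window

def smeared_offset(elapsed):
    """Extra seconds absorbed after `elapsed` seconds of the window."""
    if elapsed <= 0:
        return 0.0
    if elapsed >= SMEAR_WINDOW:
        return 1.0
    return elapsed / SMEAR_WINDOW  # linear ramp, ~11.6 microseconds per second

print(smeared_offset(43200))  # halfway through the day: 0.5 seconds absorbed
print(smeared_offset(86400))  # end of the window: the full leap second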

2) The tech giants can use the other time scales. If they want a time standard that always goes forward, never back, and is never smeared, they can use the existing TAI or GPS time. There is no network of servers distributing those standards the way NTP distributes UTC, but there could be. Multi-billion-dollar tech companies like Google, Amazon, and Facebook can absolutely afford the time and resources to make that their standard.

Let's look at GPS time as the option. The standard is already in place, floating above our heads, and many, many places use that tech for time coordination today. I even use it in my own home.

Here's a screen shot of my NTP server that uses GPS for time synchronization. The time data here is a super-accurate standard that cost me less than $100 to add. This clock is accurate to 2^-20 seconds (see precision=-20 above), which equates to about one millionth of a second. One-millionth-of-a-second accuracy, in place today. Basing "server time," or whatever you want to call it, on GPS time would probably be trivial because, again, it's a software change. Do you really know or care if your Instagram post says it was posted at 1:34:18 instead of 1:34:00? Unlikely.
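
As a back-of-the-napkin illustration of how small that software change is (my own sketch; the 18-second offset is current as of this writing and only changes when a leap second is added):

from datetime import datetime, timedelta, timezone

GPS_UTC_OFFSET = 18  # GPS time currently runs ahead of UTC by 18 seconds

def gps_to_utc(gps_timestamp):
    """Convert a GPS-scale timestamp to UTC for civil display."""
    return gps_timestamp - timedelta(seconds=GPS_UTC_OFFSET)

# Midnight on the GPS scale is 23:59:42 UTC the previous day
print(gps_to_utc(datetime(2022, 8, 1, 0, 0, 0, tzinfo=timezone.utc)))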

We already have a “leap” standard.
Just like the earth doesn't rotate on its axis in exactly 24 hours, it doesn't revolve around the sun in exactly 365 days. It's more like 365.24 days to make a year. We already have a way to handle the Earth's imperfect motion, and it's called the leap day on a leap year. Just ask all those February 29th babies who should be 40 but claim to be 10.

After all the analysis, here’s my recap.
It's wonderful that technology is in a place where we can measure time by counting the vibrations of an atom. Our Earth is not actually that precise, though, so that imperfection trickles down to our days and years. Our time and date standards are well thought out to align with the solar changes in both. We should maintain UTC against the solar "civil" day, not an arbitrary day counted against a cesium atom.

Instead of the tech companies making long-term fixes to their software, either by truly creating programming that would support a leap second or by switching to a linear standard, they want to take the easy route and change the UTC standard. I believe we should keep the UTC standard aligned with UT1 (solar time), and that requires an occasional adjustment, just like a leap year. The great thing about the internet is that it was built to define its own standards. If Meta, Google, Amazon, and others find their needs are different, they can join together, create an Internet Standard via the RFC process, convince their peers to adopt it, and go to town. That's why it's there, so let's ask the tech companies to work within their own universe if they need a different standard.

While they consider that, I hope you will join me in embracing the imperfections in our planet and solar system: support the leap second and leave UTC alone.

I’m glad to have feedback. Tweet me @n4bfr with your thoughts.


Follow up: 8/2 at 3:20 PM – Found this tweet from @qntm with a tool to use TAI on Unix. Again, it’s a software problem.

The Wednesday Morning Crash Bug

I have a problem with my computer that is stumping me. The first time I start it on Wednesday mornings, it runs about 8 minutes and then it locks up. The screen freezes, and no input is possible from the mouse or keyboard. Once I restart the computer it will happily run for another 6 days and 23 hours, then it's back to lockup.

Last month I started my bug chase in earnest. I scrubbed the event logs and can't really find a source. I get unhelpful responses like this when I look at Event Viewer:

My assumption is that some process is phoning home on a schedule, then attempting to do something and locking up at that point. My most likely villain is Microsoft Defender, because I see WMI running around the same time and that seems to be part of how the home version manages it. I looked into changing the time for update checking, but that seems to be restricted to enterprise versions in the MS documentation I have read.

So, it’s at a place where I can live with it. I know it’s going to happen and when, so I can plan for it. It’s just annoying and seems like there should be a way to solve this. Some solutions I have ruled out:

  • I am not switching to Linux or another OS. I have things I need to access that will only run on Windows.
  • I am not switching anti-virus software. I can only imagine that would make it worse, not better.

If you have any thoughts on this, please send me a tweet to @N4BFR on Twitter and help with the conversation.

How Long for Long Pi – Part 4 – Bring out the Raspberries

In the 4th post in this series (find post 3 with Win-tel stats here) I broke out the Raspberry Pi collection to see how this device has changed over the generations. I can say for sure it only gets better.

In 4 generations the Pi's performance has improved from almost 19 hours to calculate Pi to 1 million places to just under 4 hours. That's an 80% performance improvement in 8 years. The price has gone up (the Pi 4 as I have it was $75 vs. the $25 of the Pi 1), but more than 4 times faster for less than 3 times the price over those same 8 years is amazing to me.

I take the Pi 0 W results with a grain of salt because it's supposed to be a smaller, less powerful board. But it costs $10 new. If you want to compare them by their different SoCs, Wikipedia has a great article that has all the specs.

I’m still a big Raspberry Pi fan and I probably could do more with a single Pi than I do, but I have a dozen in this room and they just crank along like magic. I don’t have one favorite project but one that you might want to read about is my Flip Dot Clock.

Today I have some older Apple machines on the bench. I’ll share those results tomorrow.

How Long for Long Pi – Part 3 – Intel Machines

Following up on my How Long for Long Pi posts from Thursday and Friday, I wanted to start quantifying the data a little more. I was able to gather data from 5 Intel computers. They are:

CPU Type | CPU Generation  | Form Factor
Intel i7 | 12th Generation | Homebuilt Desktop
Intel i7 | 10th Generation | Legion Laptop
Intel i7 | 8th Generation  | Yoga Laptop
Intel i7 | 7th Generation  | Asus ROG Desktop
Intel i7 | 7th Generation  | Yoga Laptop

My testing methodology was to run 3 passes of the test calculating Pi to 100,000 places via a Python 3 script (see Friday's post), and then 3 passes to 1,000,000 places. All 5 machines ran native Windows (the Legion Laptop and 7th Gen Yoga are Win 10, the others Windows 11). On 4 of those machines I could dual-boot into Ubuntu 20 Linux from a USB SSD drive.
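
For anyone who wants to reproduce this, here's a minimal sketch of the kind of harness I mean (my own illustration; it assumes a pi(digits) function like the Machin-formula script used in this series):

import time

def average_runtime(func, arg, passes=3):
    """Run func(arg) `passes` times and return the average elapsed seconds."""
    total = 0.0
    for _ in range(passes):
        start = time.time()
        func(arg)
        total += time.time() - start
    return total / passes

# Assuming pi(digits) is defined as in the benchmark script:
# print(average_runtime(pi, 100000))    # 3-pass average, 100K digits
# print(average_runtime(pi, 1000000))   # 3-pass average, 1M digits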

I was surprised that the Ubuntu runs averaged 13.7% faster calculation time than the Windows runs. I don't have the data to attribute that to operating-system overhead, to the efficiency of the respective Python 3.10 builds, or to something else entirely.

When you spend more time calculating, you find an even bigger difference: Ubuntu calculations were 28.3% faster at the 1,000,000-place length. The other thing that jumped out at me in both calculations was that the 7th Gen desktop was faster than the 8th Gen laptop (by 4.3% on the Ubuntu 1 million run). While all these processors are multi-core, watching Task Manager on Windows and htop on Ubuntu, it appeared that the calculation ran on a single core on every machine.

Testing of all 5 versions of the Raspberry Pi (0W, 1, 2, 3, 4) is underway. I've also got some other devices to test, and I was surprised to see a handheld device outperform one of the 7th Gen Intel machines at a million places.

Again, please feel free to provide comments on my Twitter – @N4BFR.

How Long for Long Pi?

Note: This is less of a blog post and more of a running commentary on a project I have conceived. I have a long way to go on it but I hope you enjoy the journey.

I've been thinking about computers I have seen at places like The National Museum of Computing in the UK or the Computer Museum of America here in Metro Atlanta. One of the things that has always challenged me is how to benchmark computers against each other. For instance, we know the Cray 1A at the CMoA had 160 megaflops of computing power, while a Raspberry Pi 4 has 13,500 megaflops according to the University of Maine. What can you do with a megaflop of power, though? How does that translate to the real world?

I'm considering a calculation matrix that would use one of two metrics. For older computers: how many places of Pi can they calculate in a fixed amount of time, say 100 seconds? For newer computers: how long does it take the machine to calculate Pi to 16 million places? Here are my early examples:

Pi to 10,000 Places on Raspberry Pi

Computer             | Processor     | RAM       | Elapsed Time               | How Calculated
Raspberry Pi Model 3 | ARM Something | Something | 6 Min 34 Sec (394 Seconds) | BC #1 (Raspbian)
Raspberry Pi Model 3 | ARM Something | Something | 2 Min 15 Sec (135 Seconds) | BC #2 (Raspbian)
Raspberry Pi Model 3 | ARM Something |           | 0 Min 0.1 Sec              | pi command

Pi to 16,000,000 Places

Computer               | Processor                          | RAM   | Pi to 16M Places Time      | How Calculated
Lenovo Yoga 920        | Intel Core i7-8550U CPU @ 1.8 GHz  | 16 GB | 9 Min 55 Sec (595 Seconds) | SuperPi for Windows Version 1.1
Lenovo Yoga 920        | Intel Core i7-8550U CPU @ 1.8 GHz  | 16 GB | 0 Min 23 Sec               | pi command
N4BFR Vision Desktop   | Intel Core i7-12700K CPU @ 3.6 GHz | 32 GB | 3 Min 15 Sec (195 Seconds) | SuperPi for Windows Version 1.1
Raspberry Pi Model 3B+ | ARM 7 Rev 4 (v7l)                  | 1 GB  | 6 Min 03 Sec (363 Seconds) | pi command

Tool selection will be an issue because I want consistent performance across operating systems; I will want something that computes at roughly the same speed on Windows as on Unix.

  • SuperPi for Windows 1.1 was the first tool I came across, and it seemed pretty straightforward and ran on the many versions of Windows I tried.
  • Moving on to a calculator I could use on Unix, I found this John Cook Consulting website that had a couple of calculations using the bc program. I found the results inconsistent on the Lenovo Yoga 920:
BC Calculation 1: time bc -l <<< "scale=10000;4*a(1)"

BC Calculation 2: time bc -l <<< "scale=10000;16*a(1/5) - 4*a(1/239)"

I then found the pi command, which might be more consistent with what I need.

$ time pi 10000

Pi Calculations on Lenovo Yoga 920
Windows time is reported by SuperPi. BC time is the "real" time reported by the process.

Pi Calculated to X Places | Windows Time | BC 1                        | BC 2         | pi Command
10K (pi comparison)       | 1 Min 45 Sec | 0 Min 32 Sec                | 0 Min 35 Sec | 0.09 Sec
20K                       | 3 Min 22 Sec | 0.                          |              |
50K                       |              | Incomplete after 15 minutes |              |
128K                      | 0 Min 01 Sec | Incomplete after 60 minutes |              |
512K                      | 0 Min 08 Sec |                             |              |
1M                        | 0 Min 16 Sec |                             |              |
8M                        | 3 Min 05 Sec |                             |              |
16M                       | 9 Min 55 Sec |                             |              | 0 Min 23 Sec

So using BC as a method of calculating does not seem to scale.



Coming back to this a few days later, I may have a partial solution. This will limit the use of the test on older machines, but should be fairly consistent with newer ones. I plan to do the calculation with a script in Python 3. This should allow for roughly similar performance on the same machine, making results more comparable.

Python3 Downloads: https://www.python.org/downloads/release/python-3105/

Python3 methods for calculating Pi: https://www.geeksforgeeks.org/calculate-pi-with-python/

I was able to get a rudimentary calculation working in Windows using both of the formulas, and I included a function to time the process consistently. Now I need to compare in Linux and scale the calculation up to a material number of places for this to be an effective measure.

I have found a few more options thanks to StackOverflow and I’m testing them now on my 12th Gen Intel machine.

  • 100,000 digits of Pi using the "much faster" method proposed by Alex Harvey: 177.92 seconds for the first pass, 177.83 seconds for the second pass. I like the consistency.
  • Guest007 proposed an implementation using the Decimal library. I attempted a 10,000-digit calculation and that took 24.6 seconds; 100,000 places didn't complete after more than 10 minutes. Interestingly, a peek at CPU usage showed it only consuming 8.1% of CPU time. (A sketch of a Decimal-based approach follows this list.)
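
For anyone curious what a Decimal-library approach can look like, here's a minimal Chudnovsky-style sketch (my own illustration; I don't know whether it matches the implementation Guest007 posted):

from decimal import Decimal, getcontext

def pi_chudnovsky(digits):
    """Compute pi to roughly `digits` places with the Chudnovsky series."""
    getcontext().prec = digits + 10  # guard digits for rounding
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, K = 1, 13591409, 1, 6
    S = Decimal(L)
    # Each term of the series contributes about 14 digits of accuracy
    for i in range(1, digits // 14 + 2):
        M = M * (K**3 - 16 * K) // i**3  # exact integer recurrence
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    return C / S

print(str(pi_chudnovsky(100))[:102])  # 3.14159...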

Tomorrow I’ll start a new chart comparing these two methods across multiple machines.

Raspberry Pi GPS Time Server with Bullseye

I'm into accurate time. Ever since I stumbled across the SatSignal.eu site I have been running a Raspberry Pi on my network as a Stratum 1 time server. For those not familiar with strata, the only level higher is Stratum 0, and that is reserved for the absolute standard of time sources, like the National Institute of Standards and Technology clock and GPS satellites.

2016 Raspberry Pi Clock showing leap second addition at the end of 2016

I had been having some entropy in my current set of 6 GPS clocks from various issues, so I decided to rebuild my clock from a base install of the new Raspbian Bullseye distribution. Since I didn't see a single definitive source, I put this listing together, and I'm glad to share it with the community that has been good to me with previous builds. My sources include SatSignal.eu, tomasgreno.cz, and adafruit.com. Much of what I did is just compiling their steps and changing the order slightly to minimize reboots. Those other guides may work better for you, but this version worked for me.

Let’s talk hardware. I have done this project with a Raspberry Pi 1 through a Pi 4 as well as the Pi Zero and Zero W. I prefer the form factor of the full sized Pi to go along with the GPS hardware, but as long as you can make the GPIO connections from the GPS to the Pi all should work.

For a GPS module I use the Adafruit Ultimate GPS with the following pin connections. If you want to use something different, consult the breakout manufacturer and use pinout.xyz to set the proper connections. For my connections I typically use:

GPS Breakout Pin                         | Raspberry Pi Pin
VIN (Voltage in)                         | Pin 4 – 5V Power
GND (Ground)                             | Pin 6 – Ground
RX (Receive, to get data from the Pi TX) | Pin 8 – GPIO 14 – UART TX
TX (Transmit, to send data to the Pi RX) | Pin 10 – GPIO 15 – UART RX
PPS (Pulse Per Second)                   | Pin 12 – GPIO 18

It’s not a typo, make sure TX goes to RX on the other board and vice versa.

Now on to software. Start with a clean version of Raspbian Bullseye on a MicroSD. I downloaded mine from the official RaspberryPi.com website. I used the "Raspberry Pi OS with Desktop" version and an 8 GB MicroSD card as the media. I'm skipping the items related to base configuration of the host name and other start-up items; there are other sources for that. All the commands you see will be entered via the command prompt.

The instructions from here forward assume you have a working Raspberry Pi, connected to the internet with the GPS attached.

  • Start by adding two additional lines to the /boot/config.txt file. This starts the process of disabling Bluetooth on the Pi and sets the Pulse Per Second GPIO pin, if your GPS supports it.
    • Note: in this document, a command following $ gets entered at the command prompt; other lines are entered inside the file being edited (at the bottom, on a new line, is usually good). Once changes are entered, use Ctrl-X, Y, and Enter to save, exit the file, and return to the command prompt. And yes, I use nano as my text editor. You should use what you want. I'm not a text editor drill sergeant.
$ sudo nano /boot/config.txt

#Changes for GPS Clock
dtoverlay=pi3-miniuart-bt
# Customize gpiopin below to match the pin you wired PPS to
dtoverlay=pps-gpio,gpiopin=18
  • Disable Bluetooth in system control
$ sudo systemctl disable hciuart
  • Add a reference in /etc/modules for the PPS management software we will install shortly.
$ sudo nano /etc/modules

pps-gpio
  • Run a complete set of updates to the Pi Software
$ sudo apt-get update
$ sudo apt-get dist-upgrade
$ sudo rpi-update
$ sudo reboot

Once the computer has rebooted, it’s time to begin installing the key software.

  • Install PPS tools and a set of system libraries
$ sudo apt-get install pps-tools
$ sudo apt-get install libcap-dev
$ sudo reboot
  • Now let's test to see if the PPS software was installed by checking the loaded kernel modules and the OS boot logs.
$ lsmod | grep pps 

You should get two responses back that look something like this. Don’t worry if the numbers are different.

$ dmesg | grep pps

Make sure you have a line that says “new PPS source…”

  • Once you see both of those, we can check and see if the GPS is sending data. Your GPS must have a "fix," meaning it's getting data from at least three satellites, for this to work.
$ sudo ppstest /dev/pps0

Success looks like this:

Don’t worry about the specific numbers, just look for incrementing sequence numbers. The data will continue to populate every second until you hit CTRL-C to stop it.

  • Moving on, we have installed the GPS module and gotten data from part of it, but have not installed the main GPS software set yet. This should do it:
$ sudo apt-get install gpsd gpsd-clients gpsd-tools 

Once those are complete we can take a look at the data coming from the GPS by peeking at the port.

$ sudo cat /dev/ttyAMA0

You should get a continuing output with lines like this. I look for lines that start with $GPRMC (Specific location obscured by X’s)

pi@Telstar5A:~ $ sudo cat /dev/ttyAMA0
$GPGGA,220752.000,33XX.XXXX6,N,084XX.XXXX,W,1,07,1.13,278.6,M,-30.9,M,,*5E
$GPGSA,A,3,04,03,26,31,22,27,16,,,,,,1.46,1.13,0.92*0A
$GPRMC,220752.000,A,33XX.XXXX,N,084XX.XXXX,W,0.27,216.85,171121,,,A*7C
$GPZDA,220752.000,17,11,2021,,*51

Again, CTRL-C to stop it. If you get a stream of data and it's gibberish, your GPS may be sending at a different baud rate. A good place to start if you see that is this SatSignal.eu page, which looks at other GPS modules and other methods.

  • Now, let’s temporarily send that data to some GPS software for interpretation.
$ sudo gpsd /dev/ttyAMA0 -n -F /var/run/gpsd.sock

Then we'll open the gpsmon software to look. (There's also a tool called cgps. Use either; this is a personal preference thing.)

$ gpsmon
Location obscured for privacy.

The screenshot above will tell you your exact position, the number of satellites your GPS sees, and the status of your PPS data all in one screen. Did I mention you CTRL-C to get out of a screen like this? Because you do.

  • Configure the GPS software to auto-start when you boot your machine. I have seen a couple of different processes, but this one works consistently for me.
$ sudo nano /etc/default/gpsd

Unlike the other file edits where you add a line, this is what the whole file should look like when you are done. You may just want to cut and paste this whole section, or type it in, whatever works for you, I won’t judge.

#Updated for GPS Pi Clock

START_DAEMON="true"

# Devices gpsd should collect to at boot time.
# They need to be read/writeable, either by user gpsd or the group dialout.
DEVICES="/dev/ttyAMA0"

# Other options you want to pass to gpsd
GPSD_OPTIONS="-n"
GPSD_SOCKET="/var/run/gpsd.sock"

# Automatically hot add/remove USB GPS devices via gpsdctl
USBAUTO="false"
  • Almost done with the GPS section. Four more commands to go.
$ sudo systemctl stop gpsd.socket
$ sudo systemctl disable gpsd.socket
$ sudo ln -s /lib/systemd/system/gpsd.service /etc/systemd/system/multi-user.target.wants/
$ sudo reboot

That third $ command (between "disable" and "reboot") goes on a single line; this blog text tool wraps it. It should look like this:

  • If you want to reconfirm everything is working again after the reboot, run gpsmon like above and watch the pretty data fly by. Now let's connect the GPS to the clock. I'm choosing NTP as my time server software for this project. You might want to play with Chrony as well.
$ sudo apt-get install ntp

Once that is done, you want to stop the timesyncd service that is installed by default with Bullseye and replace it with NTP.

$ sudo systemctl stop systemd-timesyncd
$ sudo systemctl disable systemd-timesyncd
$ sudo service ntp stop
$ sudo service ntp start

Let’s test. “Out of the box” the NTP software checks with servers on the internet to get the time. It will look something like this:

$ ntpq -p -c rl
The * on the left indicates the chosen server, this one is at Georgia Tech.

Great news! The clock is syncing, but if you look at the bottom you'll see that after "leap=00" it says "stratum=2," which is nice, but we want to use the GPS to make it a Stratum 1 clock.

  • It’s time to cross the streams and point the NTP software to look at the GPS and PPS signals for time. That means editing the NTP configuration file.
$ sudo nano /etc/ntp.conf

There are a lot of other settings in the file, so I won’t give the whole file this time but here’s what I recommend. Scroll down until you get to this section:

Use the # sign at the beginning of a line to comment out several of those "debian.pool" lines. You do want to keep an internet server on the list as a backup and for diversity, but you won't need all of them. Save those for the folks who don't have satellite time at home. Just below the "pool" entries, add these 6 lines, each on its own line:

# Kernel-mode PPS reference-clock for the precise seconds
server 127.127.22.0 minpoll 4 maxpoll 4
fudge 127.127.22.0 refid PPS

# Coarse time reference-clock - nearest second
server 127.127.28.0 minpoll 4 maxpoll 4 iburst prefer
fudge 127.127.28.0 time1 +0.105 flag1 1 refid GPS

If you want to use different servers on the internet, there are plenty to supplement. The manual page about ntp.conf can tell you more about other things you can do with this file.

When your changes are made it should look like this.

Do that cool CTRL-X thing and get out of there before you break anything (kidding).

Time to get the NTP client to read the new configuration file.

$ sudo service ntp restart

It sometimes helps to reboot too. Your call.
Now let’s check and see what time source we are using:

$ ntpq -p -c rl

Success! Why? Three things you want to see on this screen:
1 – The SHM / .GPS. line has a * next to it, indicating it’s the primary time source. In the “st” column you can see a 0 which indicates it’s connected to a “Stratum 0” source.
2 – The PPS / .PPS. line has an o next to it, indicating it is a “PPS peer” and it’s getting very specific pulse data from the GPS signal. It’s also a “Stratum 0” source.
3 – The “stratum” field for your NTP server now is “stratum=1” which is pretty much the best you can get as a home user.

It may take a little bit for the PPS to settle in as the primary time source, so don’t worry if it doesn’t do it in the first 5 minutes.


So, that's the project. Why do you need this? Well, I do it for fun, but there are several applications that require very accurate time. For instance, in ham radio the cycles for a program like FT8 depend on an accurate clock to switch between receive and send modes. Is this the thing I'm going to replace a rubidium time standard with? No, but for about $100 it's a nice thing to have and a good early project for someone learning about the Raspberry Pi. You can set Windows, Mac, or Linux clients to point to your home server for time instead of time.windows.com or other sources.
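
For example, pointing a Linux client at the new server is a one-line change in its NTP configuration (192.168.1.50 is a made-up address; substitute your server's LAN address):

# /etc/ntp.conf on a LAN client
server 192.168.1.50 iburst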

One final note, this is accurate for me as of the time in that last screen shot. Something is bound to change eventually, so expect these instructions to drift over time as things change. Figuring that out is one of the fun things for me.

If you do this project and want to share success, you can tweet me @N4BFR or find me in other places on the internet.

Why I am leaving Amazon Sidewalk on.

Lots of stories this week about how Amazon is activating their Sidewalk network. I think it will really benefit people; here's why.

It’s Open Source.

It's not just a network for Amazon; it is a network for IoT devices. Several non-Amazon companies, like Tile, will have the same access to the same network.

It's Double Encrypted.

Both individual packets and the connection are encrypted. So even if the security of the network is breached, the data is still locked.

It’s not proprietary data.

This data is going to be anonymous; there is no PII in the packet. Just something saying "I'm here" or "No Mail Yet."

It’s not for web surfing.

The maximum amount of bandwidth used is low, and there are caps on how much is used per month. It is not going to slow you down and it is not going to eat up your bandwidth cap.

It's way more efficient than cellular.

The low-power aspect of this is very appealing. Using trackers over Wi-Fi or cellular would suck down your batteries. Think of being able to put solar-powered weather stations all over the neighborhood.

You are probably already doing this.

Have an iPhone? It is probably sending tracker data back from AirTags via your connection.

It’s more bandwidth efficient.

Sending data over the cellular network is expensive on a cost per bit basis. Your home internet connection is a fraction of that. Having this data go on the home network frees cell data for other things that may be more important to you.

No apologies for Amazon on how they are rolling this out. It is challenging to make people aware of an opt-out option, and an opt-in model lacks scale. These are the types of things a network manager needs to take into consideration.

So, for the bandwidth efficiency, broad scale, and robust security of low-fidelity, telemetry-type data, I feel this network's benefits go way beyond the stumbles Amazon has had with its launch.