GPS uses relativity for accuracy

Superfact 23: GPS uses relativity for accuracy. The Global Positioning System (GPS) relies on both special relativity and general relativity to guide you to your destination. In fact, GPS would be rendered useless without the theories of relativity.


Did you use Einstein’s Theories of Relativity to get to the grocery store today?

The theories of relativity may seem strange and impractical, something used only for astrophysics, black holes, cosmology, and extreme velocities. They feature strange concepts such as time dilation, the stretching and bending of space, events that are simultaneous for some observers but not for others, the universal constancy of the speed of light in a vacuum, and the equivalence of mass and energy.

Therefore, it is a bit surprising that without the theories of relativity the GPS app on your phone would not be able to guide you to the grocery store. That’s why I call it a super fact that GPS uses relativity for accuracy.


GPS and Time Dilation

GPS is a satellite-based radio navigation system that provides location and time information anywhere on Earth. It is amazingly accurate: the basic GPS service provides users with approximately 7.0-meter accuracy, 95% of the time, anywhere on or near the surface of the Earth.

The fact that this information is provided by satellites orbiting high above Earth's surface at high speeds makes both general relativity and special relativity necessary. For the system to work, GPS must calculate precisely the time it takes for signals to travel from the satellites to a receiver on Earth. The satellites travel fast enough that the resulting time dilation must be accounted for. In addition, they orbit high above Earth's surface, where Earth's gravitational field is weaker than at ground level. Clocks run faster in weaker gravitational fields due to gravitational time dilation, so that must be corrected for as well.

If you ignored relativity, the error would accumulate to about six miles (roughly ten kilometers) in a single day. You are not going to find the grocery store that way, unless you use the old-fashioned method of reading a map. In a sense, every time your GPS device finds the grocery store for you, it proves Einstein right.
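The two clock corrections can be estimated with a back-of-the-envelope calculation. Here is a rough sketch in Python; the orbital constants are approximate published values for GPS satellites, my inputs rather than figures from this article:

```python
# Estimate the relativistic clock drift of a GPS satellite per day.
C = 2.998e8          # speed of light, m/s
GM = 3.986004e14     # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean Earth radius, m
R_ORBIT = 2.6571e7   # GPS orbital radius (~20,200 km altitude), m
SECONDS_PER_DAY = 86400

# Special relativity: the satellite's orbital speed makes its clock run slow.
v = (GM / R_ORBIT) ** 0.5                     # circular orbital speed, ~3,874 m/s
sr_shift = -(v ** 2) / (2 * C ** 2)           # fractional rate; negative = slower

# General relativity: weaker gravity at altitude makes its clock run fast.
gr_shift = (GM / C ** 2) * (1 / R_EARTH - 1 / R_ORBIT)

net_us_per_day = (sr_shift + gr_shift) * SECONDS_PER_DAY * 1e6
error_km_per_day = abs(net_us_per_day) * 1e-6 * C / 1000

print(f"Special relativity: {sr_shift * SECONDS_PER_DAY * 1e6:+.1f} microseconds/day")
print(f"General relativity: {gr_shift * SECONDS_PER_DAY * 1e6:+.1f} microseconds/day")
print(f"Net clock drift:    {net_us_per_day:+.1f} microseconds/day")
print(f"Ranging error if uncorrected: ~{error_km_per_day:.0f} km/day")
```

The net drift comes out to roughly +38 microseconds per day, and since the signals travel at the speed of light, that corresponds to a position error of about ten kilometers per day, in line with the figure above.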



GPS Facts

  • The GPS project was started in 1973 by the U.S. Department of Defense, which still owns the system.
  • The GPS satellites were launched by the United States Air Force (not NASA).
  • The first NAVSTAR satellite, later called GPS, was launched in 1978.
  • There are 31 GPS satellites currently in orbit.
  • The system requires 24 GPS satellites.
  • The 24-satellite system became fully operational in 1993.
  • The Global Positioning System costs the U.S. government about $1.8 billion annually to operate and maintain.
  • The Global Positioning System is free to use for the public worldwide.
  • Making GPS free to civilians worldwide was a decision by President Ronald Reagan in 1983 after a Korean airliner was shot down for straying off course.
  • GPS satellites carry extremely accurate atomic clocks. As explained above, GPS must account for both special and general relativity.
  • Other satellite systems help improve GPS, including WAAS (in the U.S.), EGNOS (in Europe), and MSAS (in Japan).
  • GPS is not the only satellite navigation system; other countries operate their own: GLONASS (Russia), Galileo (EU), and BeiDou (China).
  • Ukraine is helped by both GPS and Galileo.
  • Russian forces have been actively jamming GPS signals in Ukraine.

Uses of GPS

  • Examples of consumer electronics that use GPS are smartphones, tablets, smartwatches, car navigation systems, cameras (DSLRs with GPS), some laptop models, fitness trackers (e.g., Fitbit), and drones.
  • Examples of vehicles using GPS are cars, delivery vans, trucks, aircraft, trains, ships and boats.
  • Military uses of GPS include guided missiles, guided munitions, tactical radios, communication systems, soldier-worn devices for location tracking, military vehicles and military aircraft.
  • Additional examples of GPS use include construction equipment for site positioning and machine guidance,  tractors for precision farming and other agricultural machinery, surveying equipment, pipeline inspection drones, other inspection drones and rovers, emergency locator beacons, pet trackers, smart collars, livestock monitoring, personal trackers, and geocaching devices.

As you can see, GPS is extremely useful, and there are a lot of interesting facts about GPS.


To see the other Super Facts click here

The Nobel Prize in Physics and Neural Networks

“The Nobel Prize in Physics and Neural Networks” is not a super fact but just what I consider interesting information.

The Nobel Prizes are in the process of being announced. The Nobel Prizes in Physiology or Medicine, Physics, Chemistry, and Literature have been announced, and the Nobel Peace Prize will be announced any minute now. The Nobel Prize in Economics will be announced on October 14.

The Nobel Peace Prize tends to get the most attention, but personally I focus more on the Nobel Prizes in the sciences. That may be because of my biases, but those prizes also tend to be more clear-cut and are rarely politicized. The Nobel Peace Prize is announced and awarded in Oslo, Norway, while all the other prizes are announced and awarded in Stockholm, Sweden.

Nobel Prize In Physics

What I wanted to talk about here is the Nobel Prize in Physics, given to John J. Hopfield and Geoffrey E. Hinton. They made a number of important discoveries in the field of artificial intelligence, more specifically in neural networks. This is really computer science, not physics. However, they used tools and models from physics to create their networks and algorithms, which is why the Nobel committee saw fit to award them the Nobel Prize in Physics.

Perhaps we need another Nobel Prize for computer science. The topic is also of interest to me because I’ve created and used various neural networks myself. It was not part of my research or part of my job, so I am not an expert. For all of you who are interested in ChatGPT: it is built on a so-called deep learning neural network (one with multiple hidden layers) containing roughly 175 billion parameters, the adjustable connection weights between its artificial neurons. By the way, that is more than the roughly 100 billion neurons in the human brain, although parameters and biological neurons are not directly comparable.

So, what is an artificial neural network?

A simple old-style 1950s neural network (my drawing)

The first neural networks, created by Frank Rosenblatt in 1957, looked like the one above. You had input neurons and output neurons connected via weights that you adjusted with an algorithm. In the case above, there are three inputs (with values 3, 0, and 2), and each output sums the inputs multiplied by their weights:
3 × 0.2 + 0 + 2 × (−0.25) = 0.1 and 3 × 0.4 + 0 + 2 × 0.1 = 1.4, and then each output node applies a threshold function, yielding the outputs 0 and 1.
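The worked arithmetic above can be reproduced in a few lines of Python. The weights on the middle input are hidden by the zero input, and the drawing does not state the threshold, so those two values are my assumptions; the rest follows the example:

```python
# Forward pass of the single-layer network from the drawing.
inputs = [3, 0, 2]
# weights[i][j]: weight from input i to output j.
# The middle row is multiplied by the 0 input, so its values are arbitrary.
weights = [[0.2, 0.4],
           [0.5, -0.3],
           [-0.25, 0.1]]
THRESHOLD = 0.5   # assumed; any value between 0.1 and 1.4 gives outputs 0 and 1

# Each output is the weighted sum of the inputs...
sums = [sum(x * w[j] for x, w in zip(inputs, weights)) for j in range(2)]
# ...passed through a threshold function.
outputs = [1 if s >= THRESHOLD else 0 for s in sums]

print(sums)     # approximately [0.1, 1.4], as in the worked example
print(outputs)  # [0, 1]
```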

To train the network, you create a set of inputs along with the output you want for each. You start with random weights, calculate the total error, and use that error to compute a new set of weights. You repeat this over and over until the network produces the desired output for each input. The amazing thing is that the trained network will often also give you the desired output for an input that was not used in the training. Unfortunately, these early networks weren’t very good; they often failed and, for many problems, could not be trained at all.
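That training loop can be sketched with the classic perceptron learning rule. The AND-gate task, the learning rate, and the variable names below are my choices for illustration, not details from the article:

```python
# Train a single-layer perceptron on the AND function.
training_data = [                 # (inputs, desired output)
    ([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1),
]
weights = [0, 0]
bias = 0
LEARNING_RATE = 1

def predict(x):
    s = sum(xi * wi for xi, wi in zip(x, weights)) + bias
    return 1 if s >= 0 else 0     # threshold function on the output node

for epoch in range(50):           # repeat "over and over"
    total_error = 0
    for x, target in training_data:
        error = target - predict(x)
        total_error += abs(error)
        # nudge each weight in the direction that reduces the error
        weights = [wi + LEARNING_RATE * error * xi for wi, xi in zip(weights, x)]
        bias += LEARNING_RATE * error
    if total_error == 0:          # every training example is now correct
        break

predictions = [predict(x) for x, _ in training_data]
print(predictions)                # [0, 0, 1] pattern of AND: [0, 0, 0, 1]
```

For a linearly separable problem like AND, this rule is guaranteed to converge; the failures mentioned above occur for problems (such as XOR) that no single-layer network can represent.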

In 1985/1986, David Rumelhart, Geoffrey Hinton, and Ronald J. Williams presented an algorithm, applied to a neural network with a hidden layer, that was very successful: it could learn patterns that the earlier single-layer networks could not. It set off a revolution in neural networks. The next year, in 1987, when I was a college student, I used that algorithm on a neural network with a hidden layer to do simple OCR (optical character recognition).

Note that a computer reading an image with a letter is very different from someone typing it on a keyboard. In the case of the image, you must use OCR, a complicated and smart algorithm for the computer to know which letter it is.

A multiple-layer neural network with one hidden layer. This setup and the associated backpropagation algorithm set off the neural network revolution. My drawing.

In the network above, the errors are used in a similar fashion to adjust the weights toward the desired output, but the algorithm that distributes those errors across the layers, the backpropagation algorithm, is far more successful.
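As a sketch of the idea (not the author’s 1987 OCR program), here is a tiny one-hidden-layer network trained with backpropagation on the classic XOR problem, which no single-layer network can learn. All sizes, rates, and names are my choices:

```python
import math
import random

random.seed(0)
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR
N_HIDDEN, LR = 4, 0.5

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# random input->hidden weights, hidden->output weights, and biases
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(N_HIDDEN)]
b_h = [0.0] * N_HIDDEN
w_ho = [random.uniform(-1, 1) for _ in range(N_HIDDEN)]
b_o = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w_ih, b_h)]
    o = sigmoid(sum(w * hi for w, hi in zip(w_ho, h)) + b_o)
    return h, o

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

loss_before = total_loss()
for _ in range(5000):
    for x, target in DATA:
        h, o = forward(x)
        # error term at the output, then propagated back to the hidden layer
        d_o = (o - target) * o * (1 - o)
        d_h = [d_o * w * hi * (1 - hi) for w, hi in zip(w_ho, h)]
        # gradient-descent weight updates
        w_ho = [w - LR * d_o * hi for w, hi in zip(w_ho, h)]
        b_o -= LR * d_o
        w_ih = [[w - LR * dh * xi for w, xi in zip(ws, x)]
                for ws, dh in zip(w_ih, d_h)]
        b_h = [b - LR * dh for b, dh in zip(b_h, d_h)]
loss_after = total_loss()
print(loss_before, "->", loss_after)   # the squared error shrinks with training
```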

Below I am showing two 10 × 10 pixel images containing the letter F. The neural network I created had 100 inputs, one for each pixel, a hidden layer, and output neurons corresponding to each letter I wanted to recognize. I think I used about 10 or 20 versions of each letter during training, by which I mean running the algorithm to adjust the weights until the error was almost gone.

Now if I used an image with a letter that I had never used before, the neural network typically got it right even though the image was new. Note, my experiment took place in 1987. OCR has come a long way since then.

Two examples of the letter F in a 10 × 10 image. You can use images like these (100 input neurons) to train a neural network to recognize the letter F.

At first, it was believed that adding more than one hidden layer did not add much. That was until it was discovered that applying the backpropagation algorithm differently to different layers created a better, smarter network, and so at the beginning of this century the deep learning neural network (or simply deep learning AI) was born. Our Nobel Prize winner Geoffrey E. Hinton was a pioneer of deep learning neural networks.

My drawing of a deep learning neural network (deep learning AI). There are three hidden layers.

I should mention that there are many styles of neural networks, not just the ones I’ve shown here. Below is a network called a Hopfield network (certainly not the only thing John Hopfield discovered).

In a Hopfield network, all neurons serve as both input and output neurons, and they are all connected to each other.
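A four-neuron Hopfield network like the one in the drawing can be sketched in a few lines. It stores one pattern with the Hebbian rule and then recovers it from a corrupted input; the stored pattern itself is my choice for illustration:

```python
# A minimal four-neuron Hopfield network.
N = 4
stored = [1, -1, 1, -1]          # the memorized pattern (+1/-1 neuron states)

# Hebbian learning: every pair of neurons is connected, w[i][j] = s_i * s_j,
# with no self-connections (the diagonal is zero).
w = [[stored[i] * stored[j] if i != j else 0 for j in range(N)]
     for i in range(N)]

def recall(state, sweeps=10):
    """Repeatedly update each neuron from the weighted sum of the others."""
    state = list(state)
    for _ in range(sweeps):
        for i in range(N):
            total = sum(w[i][j] * state[j] for j in range(N))
            state[i] = 1 if total >= 0 else -1
    return state

noisy = [1, 1, 1, -1]            # the stored pattern with one neuron flipped
print(recall(noisy))             # recovers [1, -1, 1, -1]
```

The network settles into the stored pattern because that pattern is a stable low-energy state of the weight matrix, which is the content-addressable-memory behavior Hopfield analyzed with tools from physics.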

For your information, ChatGPT-3.5 is a deep learning neural network like the one in my colorful picture above, but instead of 3 hidden layers it has 96, and instead of 19 neurons it has roughly 175 billion parameters. Congratulations to John J. Hopfield and Geoffrey E. Hinton.


To see the Super Facts click here