Artificial Intelligence is Not New

Superfact 88: The history of artificial intelligence (AI) began in antiquity, with stories of artificial beings. The first artificial neural network model was created in 1943. The Turing test was created in 1950. The field of “Artificial Intelligence Research” was founded as an academic discipline in 1956. The first trainable (able to learn) neural network was demonstrated in 1957.

Since then, artificial intelligence has come a long way. Did you hear about the computer that defeated the reigning world champion in chess? A computer finally defeated the supreme human intellect in an intellectual field. Is this the end of humanity? Oh, wait, that was in 1997, when IBM's Deep Blue beat Garry Kasparov.

White female AI robot using a microscope in a scientific laboratory.
Artificial intelligence and research concept. Shutterstock Asset id: 2314449325 by Stock-Asso

The various recent launches of large language models such as ChatGPT, Gemini, Claude, Llama, DeepSeek, etc., have impressed many people, but they have also fooled many into thinking that Artificial Intelligence is a new invention. It is not. Artificial Intelligence has been around for a long time, and its past is filled with many success stories as well as disappointments. Click here to see a timeline for Artificial Intelligence stretching from antiquity to 2025. For additional sources click here, here, here, or here.

I consider this a super fact because it is true, reasonably important, and, based on my personal experience, the long history of Artificial Intelligence comes as a surprise to many.

My Personal Experience with Artificial Intelligence

In 1986, when I was in college in Sweden, I took a class in the LISP programming language. LISP, invented in 1958, was the first Artificial Intelligence programming language. In 1987, as a university exchange student, I took a class called Artificial Intelligence at Case Western Reserve University. The book we used was Artificial Intelligence by Elaine Rich, published in 1983. The book and the course focused on decision trees and rule-based algorithms and did not even mention neural networks.

That same year I also took a class called Pattern Recognition, which introduced me to neural networks. In 1986 a landmark paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams introduced the Rumelhart backpropagation algorithm. Geoffrey Hinton received the Nobel Prize in physics in 2024; David Rumelhart and Ronald Williams had both passed away and could therefore not receive it. The Nobel Prize was also given to John J. Hopfield, another pioneer in neural networks, who invented the Hopfield network. You can read more about neural networks and the 2024 Nobel Prize in physics here.

The Rumelhart backpropagation algorithm was a giant leap forward for neural networks and for Artificial Intelligence, and it is still the algorithm used to train ChatGPT and the other large language models. Geoffrey Hinton is often interviewed in the media and often presented as the father of Artificial Intelligence. He is not, but he is responsible for arguably the greatest leap forward in neural networks, and thereby in Artificial Intelligence.

In class we used the Rumelhart backpropagation algorithm to read text in images. It is one thing to type a character on a keyboard and quite another to have a computer identify a character in an image. We trained our primitive neural networks to recognize images of letters using the Rumelhart backpropagation algorithm. We coded it in the C programming language, over perhaps 100 neurons/parameters and a few hundred synapses/weights. It worked pretty well. In comparison, GPT-4 is estimated to have on the order of a trillion parameters. Our class was among the first in the world to try out this then-new algorithm, and at the time I did not realize its importance.

Later I did research and worked in the field of Robotics, where I implemented various Artificial Intelligence algorithms, though not neural networks. I have a PhD in Applied Physics and Electrical Engineering with a specialty in Robotics. At my next workplace, Siemens, I used decision tree algorithms, which are also Artificial Intelligence but not neural networks.

What is a Neural Network?

Three blue circles connected to two red circles via lines assigned weights.
A simple old-style 1950s Neural Network (my drawing)

The first neural networks, created by Frank Rosenblatt in 1957, looked like the one above. You had input neurons and output neurons connected via weights that you adjusted using an algorithm. In the case above you have three inputs (2, 0, 3), and these inputs are multiplied by the weights on their way to the outputs: 3 × 0.2 + 0 + 2 × (−0.25) = 0.1 and 3 × 0.4 + 0 + 2 × 0.1 = 1.4. Each output node then applies a threshold function, yielding the outputs 0 and 1.
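The arithmetic above can be sketched in a few lines of code. This is a minimal illustration, not Rosenblatt's original algorithm; the input order (3, 0, 2), the zero weights on the middle input, and the threshold value 0.5 are my assumptions, chosen to reproduce the numbers in the text.

```python
inputs = [3, 0, 2]

# weights[i][j] connects input neuron i to output neuron j;
# the middle input is 0, so its (unknown) weights drop out of the sums
weights = [[0.2,   0.4],
           [0.0,   0.0],
           [-0.25, 0.1]]

def forward(inputs, weights, threshold=0.5):
    # weighted sum into each output neuron
    sums = [sum(x * w[j] for x, w in zip(inputs, weights)) for j in range(2)]
    # threshold function at each output node
    outputs = [1 if s >= threshold else 0 for s in sums]
    return sums, outputs

sums, outputs = forward(inputs, weights)
print(sums)     # approximately [0.1, 1.4], matching the arithmetic in the text
print(outputs)  # [0, 1]
```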

To train the network you create a set of inputs and the output that you want for each input. You pick some random weights, calculate the total error you get, and use that error to calculate a new set of weights. You repeat this over and over until you get the desired output for the different inputs. The amazing thing is that the trained network will often also give you the desired output for an input that was never used in the training. Unfortunately, these single-layer networks weren't very good; patterns that are not linearly separable, such as the XOR function, could not be learned at all.
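The training procedure just described can be sketched with Rosenblatt's perceptron learning rule on a toy problem (learning the logical AND function, which a single-layer network can learn). The learning rate, initial weights, and epoch limit are illustrative assumptions.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Training set: inputs together with the output we want for each input
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

# "Pick some random weights" (plus a bias, playing the role of a threshold)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
lr = 0.1  # learning rate (assumed value)

def predict(x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s >= 0 else 0

# Repeat over and over: use the error on each example to adjust the weights
for epoch in range(100):
    total_error = 0
    for x, target in data:
        error = target - predict(x)
        total_error += abs(error)
        for i in range(2):
            weights[i] += lr * error * x[i]
        bias += lr * error
    if total_error == 0:  # every training example is now correct
        break

print([predict(x) for x, _ in data])  # -> [0, 0, 0, 1]
```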

As mentioned, in 1986 Geoffrey Hinton, David Rumelhart, and Ronald J. Williams presented the Rumelhart backpropagation algorithm, which was applied to neural networks featuring at least one hidden layer. It was effective, and it could learn patterns that the single-layer networks could not. It set off a revolution in neural networks. In the network below the errors are used in a similar fashion as in the Rosenblatt network; however, the combination of a hidden layer and the backpropagation algorithm makes a huge difference.

Three blue circles connected to four yellow circles connected to two red circles all via lines assigned weights.
A multiple layer neural network with one hidden layer. This set-up and the associated backpropagation algorithm set off the neural network revolution. My drawing.

Below I show two 10 × 10 pixel images containing the letter F. The neural network I created in class (see above) had 100 inputs, one for each pixel, a hidden layer, and output neurons corresponding to each letter I wanted to read. I think I used about 10 or 20 versions of each letter during training, by which I mean running the algorithm to adjust the weights until the error was almost gone. If I then used an image with a letter that the network had never seen before, it typically got it right even though the image was new.

The 10 × 10 pixel images are filled with black pixels showing two differently drawn characters F
Two examples of the letter F in a 10 × 10 image. You can use images like these (100 input neurons each) to train a neural network to recognize the letter F.
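A minimal sketch of what such a class exercise might look like, in Python rather than the original C, with just two training images: an F and (my own assumption, for contrast) an L. The pixel patterns, network sizes, learning rate, and epoch count are all illustrative.

```python
import math, random

random.seed(0)

# Two 10x10 training images ('#' = black pixel); the L is an assumed second letter
F = ["##########", "##########", "##........", "########..", "########..",
     "##........", "##........", "##........", "##........", "##........"]
L = ["##........", "##........", "##........", "##........", "##........",
     "##........", "##........", "##........", "##########", "##########"]

def flatten(img):  # 10x10 image -> 100 input values
    return [1.0 if ch == "#" else 0.0 for row in img for ch in row]

data = [(flatten(F), [1.0, 0.0]),   # output neuron 0 means "F"
        (flatten(L), [0.0, 1.0])]   # output neuron 1 means "L"

n_in, n_hid, n_out = 100, 6, 2      # 100 inputs, one hidden layer, 2 outputs
W1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
W2 = [[random.uniform(-0.5, 0.5) for _ in range(n_hid)] for _ in range(n_out)]
b2 = [0.0] * n_out

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j]) for j in range(n_hid)]
    y = [sigmoid(sum(w * hj for w, hj in zip(W2[k], h)) + b2[k]) for k in range(n_out)]
    return h, y

def train_epoch(lr=0.5):
    total = 0.0
    for x, t in data:
        h, y = forward(x)
        # error term at the output layer (the sigmoid derivative is y*(1-y))
        d2 = [(y[k] - t[k]) * y[k] * (1 - y[k]) for k in range(n_out)]
        # propagate the error backwards to the hidden layer
        d1 = [h[j] * (1 - h[j]) * sum(d2[k] * W2[k][j] for k in range(n_out))
              for j in range(n_hid)]
        for k in range(n_out):
            for j in range(n_hid):
                W2[k][j] -= lr * d2[k] * h[j]
            b2[k] -= lr * d2[k]
        for j in range(n_hid):
            for i in range(n_in):
                W1[j][i] -= lr * d1[j] * x[i]
            b1[j] -= lr * d1[j]
        total += sum((y[k] - t[k]) ** 2 for k in range(n_out))
    return total

for _ in range(2000):   # adjust the weights until the error is almost gone
    loss = train_epoch()

def classify(img):
    _, y = forward(flatten(img))
    return "F" if y[0] > y[1] else "L"

print(loss, classify(F), classify(L))
```

After training, an F with a pixel or two changed is typically still classified as F, which is the generalization to new images described above.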

At first it was believed that adding more than one hidden layer did not help much. That changed when it was discovered that applying the backpropagation algorithm differently to different layers creates a better, smarter neural network, and so at the beginning of this century deep learning neural networks (or just deep learning AI) were born. I can add that our Nobel Prize winner Geoffrey Hinton was also a pioneer in deep learning neural networks.

Three blue circles connected to four yellow circles connected to four green circles connected to six blue circles connected to two red circles all via lines representing weights.
My drawing of a deep learning neural network (deep learning AI). There are three hidden layers.

I should mention that there are many styles of neural networks, not just the ones I've shown here. Below is a network called a Hopfield network (certainly not the only thing John Hopfield discovered).

Four neurons that are all connected to each other.
In a Hopfield network all neurons act as both input and output neurons, and they are all connected to each other.

For your information, ChatGPT-3.5 and ChatGPT-4 are deep learning neural networks like the one in my colorful picture above, but instead of 3 hidden layers ChatGPT-3.5 has 96 hidden layers, and instead of 19 neurons and a handful of weights it has a total of about 175 billion parameters (weights).
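To get a feel for how neuron and weight counts relate, here is a small sketch that counts both for a fully connected layered network. Applied to the layer sizes in my deep-network drawing (3, 4, 4, 6, 2) it gives the 19 neurons mentioned above; the weight count ignores bias terms, and note that published "parameter" counts for large language models count weights, not neurons.

```python
# Count neurons and weights for a fully connected layered network.
def network_size(layers):
    neurons = sum(layers)
    # each weight connects a neuron in one layer to a neuron in the next
    weights = sum(a * b for a, b in zip(layers, layers[1:]))
    return neurons, weights

# Layer sizes from my drawing: 3 inputs, hidden layers of 4, 4, 6, and 2 outputs
print(network_size([3, 4, 4, 6, 2]))  # -> (19, 64)
```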

Note on potential harm of AI

The potential harm of AI is a related and important topic that I have not addressed here. However, this is already a long and complex post, and I don't know enough about the topic (yet). To read more about it, check the comments made by "Grant at Tame Your Book" (in the comment section).




To see the Other Super Facts click here

The Speed of Light In Vacuum Is a Universal Constant

Superfact 4: The Speed of Light In Vacuum Is a Universal Constant

The speed of light in vacuum is a universal constant. It is the same for all observers regardless of their speed and the direction in which they are going. It is always c = 299,792,458 meters per second. If you chase a light beam, traveling at nearly its speed, you will still not be able to catch up: the beam will still move at c = 299,792,458 meters per second compared to you no matter how fast you go. This is possible because time and space don't behave the way we expect.

Superfacts

This is the fifth post of my super-factful blog and my fourth super-fact. As I mentioned previously, the goal of this blog is to create a long list of facts that are important and known to be true and yet are either disputed by large segments of the public or highly surprising or misunderstood by many.

These facts are not trivia. They are accepted as true by the experts in the relevant fields, the evidence for them is impressive, and they matter to how we view the world and what we believe; yet, despite being known to be true, they are hard pills for many to swallow. They are not scientific theories or complicated insights but facts that can be stated simply, in a paragraph or less. They may need more explanation than fits in one paragraph, but they can be stated, with a brief explanation, in just one paragraph.

The Fourth Superfact

My fourth super-fact is that the speed of light in vacuum relative to yourself is the same regardless of your motion. A beam from a flashlight you are pointing forward travels at a specific speed, c = 299,792,458 meters per second, no matter what you are comparing it to. It is important to understand that speed is normally relative. If you drive 95 miles per hour on a Texas highway, you are driving 95 miles per hour compared to the pavement, but you are traveling more than 2,000 miles per hour compared to the moon.

However, a light beam will be traveling at the speed of c = 299,792,458 meters per second (186,000 miles per second) compared to the pavement and also compared to the moon, the sun, the galaxy, the fastest spaceship possible and another light beam. The speed of light in vacuum is not relative. For light in vacuum there is only one speed compared to everything.

Someone passing you at 99.99% of the speed of light in vacuum will measure his flashlight beam to have the speed c = 299,792,458 meters per second, and he will measure your flashlight beam to have the speed c = 299,792,458 meters per second, and so will you. It is as if c + c = c; 1 + 1 = 1, not 2, didn't you know? This is logically possible because time and space are different for different observers.

This is quite shocking if you haven’t come across it before and there are a lot of people (not professional physicists) who refuse to believe it. So, in my opinion it is a super fact. In summary:

No matter how fast you travel, or in what direction, or where you are, you will measure the speed of light in vacuum compared to yourself to be c = 299,792,458 meters per second or approximately 186,000 miles per second or 671 million miles per hour. That goes for all light beams passing by you regardless of origin.

The picture shows two people Alan and Amy. Alan is on the ground. Amy is flying by Alan in a rocket speeding left. Both Alan and Amy are pointing lasers to the left.
In this picture Amy is traveling past Alan in a rocket. Both have a laser. Both measure the speed of both laser beams to be c = 299,792,458 meters per second.

In the picture above, let's say Amy is flying past Alan at half the speed of light. If you believe Alan when he says that both laser beams are traveling at the speed c = 186,000 miles per second, then you would expect Amy to measure her laser beam to travel at half of that, c/2 = 93,000 miles per second, but she doesn't. She measures her laser beam to travel at c = 186,000 miles per second, just like Alan. This seems contradictory.

The solution that the special theory of relativity offers for this paradox is that time and space are relative and Amy and Alan measure time and space differently (more on that in another post).
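The resolution can be made quantitative with the relativistic velocity-addition formula, u' = (u + v) / (1 + uv/c²), which replaces the plain addition of speeds. A small sketch (the everyday-speed example is my own illustration):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def add_velocities(u, v):
    """Combine two velocities (in m/s) the way special relativity prescribes."""
    return (u + v) / (1 + u * v / C**2)

# Amy moves at c/2 relative to Alan; her beam moves at c relative to her.
# Plain addition would give c + c/2, but Alan still measures exactly c:
beam_for_alan = add_velocities(C, 0.5 * C)
print(beam_for_alan / C)  # -> 1.0

# At everyday speeds the formula is indistinguishable from plain addition:
print(add_velocities(30.0, 30.0))  # a hair under 60 m/s
```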

Time is going to be different for me than for you. From shutterstock Illustration ID: 1055076638 by andrey_l

I should add that the realization that the speed of light in vacuum is constant, regardless of the speed or direction of the observer or the light source, was the result of many experiments, beginning with Michelson's first interferometer experiment in 1881 and culminating in the Michelson-Morley experiment at what is now Case Western Reserve University in Cleveland, Ohio, in 1887.

At first scientists thought that there was an ether that acted as a medium for light, and they assumed that Earth would be moving through this ether. What they tried to establish was Earth's velocity through the ether, but every measurement showed light having the same speed, in all directions, all the time, in summer and in winter, no matter which direction Earth was going. At first they tried to explain this by saying that the ether compressed the experimental equipment and distorted clocks in exactly such a way that the speed of light in vacuum always came out the same.

Others said that Earth was dragging the ether with it, but that explanation turned out not to hold water. With the special theory of relativity in 1905, those speculations were laid to rest: it was the way time and space are constructed and connected.

This is a drawing of the Michelson interferometer used at Case Western Reserve University
The first Michelson interferometer from 1881. It was used to measure the speed difference between two light beams (well, a split light beam) with very high accuracy for its time. The light traveled at the same speed in all directions, no matter what Earth's position and speed in its orbit around the sun were. This picture is taken from Wikipedia and is in the public domain in the United States.

The speed c = 299,792,458 meters per second is a universal speed limit created by time and space

I should point out that there is nothing magical about light itself. Light traveling through matter, like glass or water, does not travel at the speed c, but slower. That is why I keep saying "the speed of light in vacuum" instead of "the speed of light".

It is also not entirely correct to say that the speed of light in vacuum is a universal constant, because it isn't really about the speed of light. It is just that light traveling unimpeded through vacuum reaches the universal speed limit created by time and space, or the space-time continuum (that's another post). The light is prevented from traveling infinitely fast by this speed limit, and light is not the only thing behaving this way. All massless particles and radiation are prevented from reaching infinite speed by this universal speed limit, and they, too, travel at exactly the same speed c = 299,792,458 meters per second compared to all observers, just like light in vacuum.

So how are time and space arranged to cause this universal speed limit? Well, that is a surprising super fact post for another day (I will link to it once I have made the post). I can add that the discovery that the speed of light in vacuum is a universal constant changed basically everything in physics. We had to change the equations and the physics regarding not just time and space but energy, momentum, mass, force, electromagnetics, space geometry, particle physics, and much more. The energy and mass equivalence, E = mc², is a direct result of this.

Examples:

Below are some examples of what this discovery led to. Again, don’t worry about the details or how it works. I might explain these effects in future super fact posts and link to them.

  • Time for travelers moving fast compared to you is running slower.
  • Length intervals for travelers moving fast compared to you are contracted.
  • Simultaneous events may not be simultaneous for another observer.
  • The order of events may be reversed for different observers.
  • If you accelerate to a speed that is 99.999% of the speed of light, you still haven't gotten any closer to the speed of light from your own perspective. Light in vacuum will still speed away from you at c = 186,000 miles per second. It is as if the light keeps accelerating ahead of you just as much as you do; you cannot catch up. What other observers see is you accelerating less and less, getting ever closer to the speed of light but never reaching it.
  • Forces, the mass of objects, momentum, energy, and many other physical quantities would reach infinity as you approach the speed of light in vacuum, assuming you are not a massless particle.
  • Mass is energy and vice versa, E = mc².
  • Magnetic fields pop out as a relativistic side-effect of moving charges.
The E = mc² formula
Mass is energy and vice versa, a direct result of the way time and space are related. Stock Photo ID: 2163111377 by Aree_S
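Several of the effects in the list above are governed by the Lorentz factor, γ = 1/√(1 − v²/c²). Here is a hedged sketch; the speed, the 10-second interval, and the 100-meter rocket are made-up example values.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def gamma(v):
    """Lorentz factor for a speed v (m/s); it grows without bound as v -> c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

v = 0.5 * C   # example speed: half the speed of light
g = gamma(v)  # about 1.1547

# Time dilation: 10 s on the traveler's clock takes longer for you
print(10.0 * g)   # about 11.55 s
# Length contraction: a 100 m rocket is measured shorter by you
print(100.0 / g)  # about 86.6 m
```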
Can We Travel Faster Than The Speed Of Light?

So, it seems we cannot travel faster than the speed of light in vacuum. The universal speed limit appears to be a hard limit, unlike the speed limits on Texas highways. That may be true, at least locally where we are.

However, you could get around it by what is kind of cheating: stretching and bending space to the extreme, for example by using enormous amounts of negative energy. Such stretching is in fact happening to our Universe over scales of tens of billions of lightyears (a lightyear being the distance light in vacuum travels in one year). Stretching and bending space is not part of the special theory of relativity; that is Einstein's General Theory of Relativity.


To see the other Super Facts click here