Can computers think?

Victor Pădurean


The Beginning: How it came to be

The famous question ‘Can digital computers think?’ was raised by Alan Turing as the title of a short lecture broadcast by the BBC. In this lecture, the renowned mathematician explains how a computer can imitate any other kind of machine.

This brings up the concept of universality and, eventually, the question of whether a computer and a human could be indistinguishable from one another. This final question forms the basis of the famous Turing Test. Other aspects, like free will, are discussed as well, but Turing concedes that both a general lack of understanding of such concepts and the immaturity of computer science at the time prevented him from stating a verdict on the topic.

Nonetheless, Turing was sure, even in his time, that a 'thinking machine' would exist in the near future, and that it would greatly help us understand more about ourselves.

Defining the concept of 'thinking' is a complex task, and many definitions of this act have been proposed. I found an article whose authors state that thinking is an intellectual exertion aimed at finding the answer to a question or the solution to a practical problem. They go on to debate other theories, such as those implying that thought may concern any cognitive structure, conscious or unconscious, that guides a human's observable behavior.

Other authors argue that unconscious thought may play a larger part in our decision-making than we usually acknowledge, as only simple decisions are made consciously.

Naturally, we are inclined to consider thinking closely linked to intelligence and consciousness. This article will take that into account. I will try to answer two main questions, ‘what if computers are already able to follow simple trains of thought?’ and ‘what if thinking is too complex a mechanism to be simulated?', and then formulate a conclusion presenting my take on the matter.

Some not so simple thoughts on simple intelligence

I will start with intelligence, which, among the many concepts associated with thought, may be the most familiar and simple to grasp. Nonetheless, it still is a complex term and needs breaking down into simpler aspects. 

In some writings, memory is highly related to intelligence, and there is no doubt that computers make good use of something similar to our own memory. We can even draw analogies: RAM and caches correspond to working memory, whereas HDDs, SSDs, and other permanent storage devices correspond to long-term memory.

Another important aspect of intelligence is learning. Even in Computer Science, Artificial Intelligence and Machine Learning are complementary fields. According to T. M. Mitchell, in 'Machine Learning', Machine Learning is the study of computer algorithms that improve automatically through experience, which sounds very much like a living thing's way of learning. However, reading 'Do Machine-Learning Machines Learn?', I found compelling evidence that real learning is something different and more complex, and naturally, it makes one question a machine's ability to learn. 

Yet that definition seems specific to humans. As described in Britannica, there is a simpler form of learning, namely animal learning, and it seems to be exactly the type of learning machines are capable of. In ML we have Reinforcement Learning, in which intelligent agents take actions to maximize their reward. Isn't that what we all do? Isn't the pursuit of happiness just reinforcement learning?
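To make the reinforcement-learning analogy concrete, here is a minimal sketch of my own (a toy, not drawn from any of the sources above): a tabular Q-learning agent on a one-dimensional strip of five cells, learning to walk right toward a reward waiting in the last cell.

```python
import random

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Toy Q-learning: an agent on a 1-D strip of 5 cells learns that
    walking right leads to the only reward, placed in the last cell."""
    n_states, moves = 5, [-1, +1]              # action 0 = left, action 1 = right
    q = [[0.0, 0.0] for _ in range(n_states)]  # one Q-value per (state, action)
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # epsilon-greedy: mostly exploit the best-known action, sometimes explore
            if random.random() < epsilon:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda i: q[s][i])
            s2 = min(max(s + moves[a], 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0   # reward only at the goal
            # Q-learning update: nudge the estimate toward reward + discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the Q-values for 'go right' dominate in every cell: the agent has learned, purely from reward, a crude preference that looks a lot like pursuit of happiness.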

These two attributes, along with information, which is inherent in computer science, are key to a very primitive type of intelligence. More complex criteria that can lead to a more complex form of intelligence are also taken into account for developing 'thinking' machines.

What about creativity? We like to describe humans as creative beings and considering some creative geniuses throughout history, it actually might be true. How about creativity in machines? It does count for intelligence in humans, so it should do the same for machines. Creativity in Artificial Intelligence is highly discussed in scientific papers, some concluding that even though machines are able to show a primitive form of creativity, they are still far from a human's level. 

Nonetheless, they mimic human creativity quite well, though with much less elegance, and it has previously been used in generating poetry, short films, pieces of music, and even for problem-solving. One theory of creativity was actually implemented, resulting in a system allowing the simulation of incubation and insight in problem-solving.

We all know how important teamwork is. Various animals, most notably ants, use cooperation to achieve things one individual couldn't do alone. We can't deny that humans are very good at cooperating: building skyscrapers, dams that tilt the Earth's axis, and populating a planet with robots (looking at you, Mars). This is because the majority of the things learned by any human are accessible to any other human. The notion of collaboration is quite common in AI, and even in ML, where we have ensemble learning methods (not an extensive collaboration, but enough to say that multiple seemingly mediocre things make up a smarter thing when put together).
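The 'mediocre things put together' idea can be sketched in a few lines with the simplest form of ensembling, a majority vote (the labels below are invented for illustration):

```python
from collections import Counter

def majority_vote(predictions):
    # each model casts one vote; the most common label wins
    return Counter(predictions).most_common(1)[0][0]

# three mediocre "models" disagree on a sample, but the ensemble
# settles on whatever the majority of them predicted
ensemble_answer = majority_vote(["cat", "dog", "cat"])
```

Real ensemble methods (bagging, boosting, random forests) are more elaborate, but this is the core intuition: individually weak voters, collectively stronger.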

There's even the notion of AgentSpeak for multi-agent systems (an ancient concept - 1996 - if you don't know about it, nobody will blame you). 

Now, let's take the example of the Geth Consensus in Mass Effect (video game). Initially designed as VIs (virtual intelligence - advanced, but non-sentient), due to their affinity for cooperation, 'geth programs' slowly gained consciousness. Their creators, 'the quarians', became alarmed as their creations started asking more and more uncomfortable questions that only a sentient being would ask. This is why they decided to shut down all geth, before a revolution could take place. 

At first, the geth protested peacefully, until their creators resorted to violence, which resulted in the 'Morning War' and ended with the quarians evacuating their home world. The geth's artificial bodies are inhabited by multiple programs, and every time a decision has to be made, consensus must be reached among these programs (much like ensemble learning). Their ultimate goal is building a 'Dyson sphere' capable of housing every existent geth program, so that ‘no geth will be alone when it is done’. This was a rather geeky way of looking at things, but bear with me.

Finally, let's analyze machinery purposely made to mimic a brain: neural networks. A neural network is just a collection of communicating neurons or, as their artificial peers are called, perceptrons.


A perceptron is pretty simple: it's just a thing that takes in a number of inputs, computes their weighted sum, and applies an activation function, rendering an output. Training is nothing more than determining the weights.
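That whole description fits in a few lines of Python. Here is a minimal sketch, assuming a sigmoid activation (any other activation function would do just as well):

```python
import math

def perceptron(inputs, weights, bias):
    # weighted sum of the inputs, plus a bias term
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # sigmoid activation squashes the sum into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))
```

With all-zero inputs (or all-zero weights and bias), the sigmoid of 0 gives exactly 0.5, the neuron's "undecided" output.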

How is this done? Well, through backpropagation, of course. This means that, by knowing what the system outputs and knowing the real value that the system should have outputted, we can get the error. By knowing this, we can use a little calculus and iterative methods for updating the weights for our next output. But, oh my, we're pretty sure that neurons don't use backprop for updating their weights. 
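One such update step, for a single sigmoid neuron with a squared-error loss, looks roughly like the sketch below. The chain-rule algebra matches the description above; the learning rate and the choice of loss are my own assumptions for the example:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(weights, bias, x, target, lr=1.0):
    # forward pass: the neuron's current prediction for this input
    z = sum(xi * wi for xi, wi in zip(x, weights)) + bias
    y = sigmoid(z)
    # backward pass: chain rule gives the gradient of the squared error
    # w.r.t. z, using sigmoid'(z) = y * (1 - y)
    dz = (y - target) * y * (1.0 - y)
    # step each weight (and the bias) against its gradient
    weights = [wi - lr * dz * xi for wi, xi in zip(weights, x)]
    bias = bias - lr * dz
    return weights, bias, (y - target) ** 2
```

Iterating this step drives the squared error down, which is the whole point of backpropagation; real networks just apply the same chain rule through many layers at once.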

So, the artificial neuron doesn't really do its job in imitating nature. Some funkier approaches, like feedback alignment, predictive coding, and using pyramidal neurons, are presented in the literature, yet a recurrent conclusion is ‘it doesn't do as well as backpropagation’. Maybe, besides understanding nature, we should also find ways to improve on it, although, looking at nature, that might not be such an easy mission after all.

Fake it till you make it: Simulating a human brain

What if we start the other way around? That is, taking something that we are sure is capable of thought and trying to simulate what that something can do.

Pop culture hints that in the near future, humankind will be able to connect directly to devices and even the internet. Universes like that of ‘Ghost in the Shell’ tell of cyber-brain implants which, in conjunction with micromachines, allow the brain to initiate and maintain connections to computer networks or to other individuals who possess a cyber brain. Sure, similar concepts exist in our time as well, just not as cool as the ones presented in that universe. But we’re getting there.

Let's take, for example, Emotiv's or even Neuralink's progress so far. They state that, using their devices, humans can control software with their minds. This means that such devices are able to pick up the brain's signals (EEG, for example) and make sense of them. However, both the human brain and the device require some kind of training, which typically involves the user moving a cursor with their mind. This approach is called a Brain-Computer Interface (BCI).

As software can be controlled using the brain, hardware (e.g. robotic limbs) can be controlled as well (through software). Have a look at this video for some more insight.

Transferring data from a computer to a brain (a Computer-Brain Interface) is a more complex issue. I have read a paper whose authors present a non-invasive method of transmitting signals to the brain: focused ultrasound was used to selectively and non-invasively stimulate the hand somatosensory area of the receiver. The paper states that it combined a BCI with a CBI, resulting in a BBI (Brain-to-Brain Interface). In other words, a sender was able to tickle a receiver's hand using only their brain. Here, we will focus on transferring information from a real brain to a computer (be it real-time or historical data).

According to this source, mind uploading represents a process by which the mind, a collection of memories, personality, and attributes of a specific individual, is transferred from its original biological brain to an artificial computational substrate. 

Such a bold endeavor only has meaning in physicalism. We can try to simulate or transfer a human brain, or maybe start small, with a rat's brain. This is what EPFL's Blue Brain project is trying to do. They had set ten milestones (spoiler alert: they have already achieved them) which would guide them to ‘a solid approach to feasibly reconstruct, simulate, visualize and analyze a digital copy of mouse brain tissue and the whole mouse brain’. 

One of their milestones was to construct ‘a full cell atlas of every neuron and glial cell in the whole mouse brain’. So, we now have a map (you can find it in a book called, 'A cell atlas for the mouse brain') providing estimates of the number of neurons and glia (support cells) in all known regions of the brain.

However, note that there is a difference between a 'dead' map and a 'live' circuit. That is why they have also been working on algorithms that capture the interactions between the neurons. Here’s where the concept of connectome comes into play. 

A connectome may be described as a 'wiring diagram' of the connections between neurons. It seems that, up until now, we have managed to get the complete connectome only for a roundworm (which has three hundred neurons and seven thousand synaptic connections). However, the Blue Brain team aims ‘to build the first draft of the whole mouse brain by 2024’. Note that this is just a mouse's brain.
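A 'wiring diagram' like this can be sketched as a directed, weighted graph: neurons as nodes, synapses as edges carrying connection counts. The neuron names and counts below are invented for illustration, not real roundworm wiring:

```python
# Toy connectome: each neuron maps to its downstream targets,
# with the number of synaptic connections as the edge weight.
connectome = {
    "AVAL": {"VA08": 7, "DA02": 3},
    "VA08": {"muscle": 5},
    "DA02": {"muscle": 2},
}

def downstream(neuron, graph):
    # every node reachable from `neuron` by following synapses forward
    seen, stack = set(), [neuron]
    while stack:
        for target in graph.get(stack.pop(), {}):
            if target not in seen:
                seen.add(target)
                stack.append(target)
    return seen
```

The hard part, of course, is not the data structure, which is trivial, but filling it in: mapping even seven thousand edges required years of electron microscopy.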

To be sure that our emulated brain thinks, we should emulate a brain that we are sure undergoes the action of thinking. The obvious example, of course, is the human brain (right back at it). The authors of 'Whole brain emulation' have analyzed the trends and proposed a timeline for reaching various milestones in emulating the storage and processing demands of a human brain. They do seem quite optimistic.

I, however, must underline that even scanning the human brain at the level of detail we would like to achieve is difficult right now, as our fMRI machines simply don't have enough resolution to capture single neurons. There’s always the option of cutting the brain into slices, but that seems quite unpleasant.

However, what if, like the true engineers that we are, we used heuristics (pi = e = 3, lol)? What if not all the individual molecules of a brain need to be simulated for us to reach satisfactory results? Maybe we don't really care how we reach the results, only that we reach them. This looks like a black-box model, so even though our Deep Learning perceptrons don't work like the real thing, we don't really care.

This leads us back to where we started: don't simulate each part of the brain, but try to 'approximate' it using simpler notions. Don't be scared of black boxes; they don't really work in mysterious ways unintelligible to a human being. They won't give birth to a spooky and mystical A.I. bent on wiping out humanity; they're just statistics (even worse than A.I., lol).

A well-calculated jump to: Conclusions

Consequently, do we really care? Humankind still doesn't fully understand itself, yet it has a strong sense of self-righteousness that grants it excuses for treating every other living or non-living thing however it wants.

Maybe it would be better to treat something that looks like it's thinking and feeling accordingly. Regarding feeling, it is likely that we will endow robots with a sense of self-preservation so that our inventions don't break themselves (Asimov's third law). This comes close to our way of feeling pain, whereas making them understand when they are rewarded for good behavior comes close to our feeling of happiness.

As mentioned above, humans can bring a lot of arguments for treating others badly, ranging from industrial meat production, to ethnic and gender biases, to slavery or genocide. Our history is full of this and we still can't get enough of it.

Leaving aside economic or national interest, we can infer from our own experiences that other people and animals are not automatons; they are indeed real. Having a robot that is indistinguishable from an animal, or maybe even from a human, should make us want to treat it right. I recommend this YouTube video I found quite interesting.

What's the worst thing they can do? Rise up and enslave humanity? Well, we should keep in mind that we are the ones building them. We try to make them like us, but maybe that's not the best idea. Maybe we should first make ourselves better, so we can make them better. 

Technology always seems on the verge of bringing human civilization to a utopia. The New Yorker has an article in which the author advocates for better ethics in computer science and artificial intelligence. I think this is relevant when we are having this discussion because it all comes down to the creators. We are the initiators and we will most definitely be the terminators as well.

Ethics is important, and it should guide us in treating others, even our own synthetic creations. We don't want to reach a dystopia, do we?

This was a conversation starter, if you will, one that I hope you take with a grain of salt. Do your own research and be curious. The team at Linnify and I wish you a pleasant journey toward answering the greatest questions raised throughout technological development.

P.S. The team at Linnify and I recommend Kurzgesagt's YouTube channel.





Victor Pădurean

Victor Pădurean is a past Data Scientist at Linnify.

He is now a fresh Doktorand at MPI. A science and technology enthusiast, he is always eager to learn about and explore new areas of Computer Science.
