2021-04-20 00:00:00 +0000
An interpersonal perspective on replacing our jobs with robots
Some people are worried about robots taking our jobs. Not just kind of worried, but really worried that this turning point in our society could be the start of a popular dystopian sci-fi plot. These people aren’t foolish Luddites; they’re educated members of the academic community who want to contribute to the ongoing vision of our collective future. There is every reason for concern, but it doesn’t have to be scary. It’s a change that’s going to happen regardless, and we had better be ready for it. Good thing humans are one of the most (if not the most) adaptable species on the planet.
Dale Carnegie illustrates the significance of this coming turn of events quite well in How to Win Friends and Influence People. This popular self-help book isn’t about AI; it’s about people: how people work, and how we influence each other in our daily encounters. In fact, most of the book is devoted to stories that explain how to make each encounter as influential and productive as possible. We form connections with the people we work with. The mailman, the boss, the custodial staff: we treat each of these encounters as an opportunity to connect or to disconnect, and the degree to which we do has an impact on each individual’s well-being. Carnegie goes on to explain that when we take the time to meet another person with the same interest and enthusiasm for them that we hold for ourselves, we can establish a connection that eventually dissolves any unrest. However, this takes time. In the workplace, productivity often suffers because humans have mental and emotional shortcomings that are exacerbated by poor human-to-human interactions. Certainly replacing our jobs with robots would help here: no more low productivity, and nearly optimal output at all times.
But what about the cases where the person doing the bad job was handled well? They learn something about themselves in that moment, don’t they? They learn a little about what it means to communicate effectively and connect with others. By replacing us with robots, don’t we take away this opportunity to learn more about interpersonal relationships? Probably so, but it could actually be better for us. Many famous artists didn’t wish to constrain their artistic freedom and passions with financial burden. Perhaps we could be freed the same way in our interpersonal relationships. Wouldn’t it be easier to meet each person on their terms if your livelihood didn’t depend on it? Couldn’t we find ways to connect and learn from each other in a free space built for just that? Children do it all the time; it’s called play.
2021-04-14 00:00:00 +0000
General Intelligence and Consciousness
That is a relatively simple summary of the overall explanation of the recommendation architecture as a theory for higher cognition in the brain. However, it is not yet obvious where in this explanation we get general intelligence and consciousness. Why these two features deserve extra attention is of special interest. General intelligence and consciousness are the two most sought-after functions in artificial intelligence and related computational work, and for good reason. It stands to reason that a computationally instantiated general intelligence could solve many of our toughest problems, and more quickly than any human, or perhaps any team of people, could attempt. Having a machine that is capable of lifelong learning has obvious benefits to mankind. In addressing general intelligence of this kind, we also have to consider that such an instantiation could become superintelligent: it could understand information and make decisions in a capacity that far exceeds any human ability. This type of intelligence would likely be hard to control. In fact, several institutions have dedicated themselves to this issue, known as the control problem.
The rationale behind wanting a conscious machine is less obvious and, to some, nonexistent. We are not sure what exactly makes consciousness, and therefore we could inadvertently create it. Similarly, since we can’t ascertain with certainty whether anything besides ourselves is conscious, we don’t know whether general intelligence is separable from consciousness. We could say an ant colony collectively has some level of general intelligence, but we cannot preclude the colony from having some level of consciousness. To get the level of general intelligence needed to help us with some of our most complex and challenging problems, we may need consciousness to facilitate general intelligence. Another possibility for why consciousness is important has to do with self-awareness and the control problem. It’s not clear whether consciousness is needed for self-awareness. It is likely the case that in order to have a concept of self, one needs to be aware of experiences related to the self. This somewhat implies that at least an aspect of functional consciousness is needed here. We could also say that the more conscious of ourselves we are, the more likely we are to understand someone else, to be empathetic. If this is the case, that more consciousness leads to more self-awareness, then consciousness becomes a tool for relating to others. This tool may be valuable in the case of superintelligence. It is something special to be conscious. If a superintelligent machine were also conscious and self-aware, just maybe there is a small amount of kinship we could share, on the grounds of being conscious alone. However, that argument is not widely discussed and is still hypothetical. There is also the counterargument that we humans, perhaps the most self-aware species, have been the most detrimental factor to other conscious lifeforms seen to date, with the exception of our pets.
2021-04-08 00:00:00 +0000
Go All the Way — Charles Bukowski
“If you’re going to try, go all the way.
Otherwise, don’t even start.
This could mean losing girlfriends, wives, relatives and maybe even your mind.
It could mean not eating for three or four days.
It could mean freezing on a park bench.
It could mean jail.
It could mean derision.
It could mean mockery — isolation.
Isolation is the gift.
All the others are a test of your endurance, of how much you really want to do it.
And, you’ll do it, despite rejection and the worst odds.
And it will be better than anything else you can imagine.
If you’re going to try, go all the way.
There is no other feeling like that.
You will be alone with the gods, and the nights will flame with fire.
You will ride life straight to perfect laughter.
It’s the only good fight there is.”
2021-04-07 00:00:00 +0000
If neural firing is metabolically expensive and if thoughts are a product of such firing, why do we think all the time?
Is there something to do with the continuous flow of electrical impulses and metabolic constraints?
Optimization of cortical folding: three sheets of paper constrained by the need to connect to other regions and by the thickness of the cortex; is the configuration optimal (in terms of connectivity)?
Does a human-level intelligent machine need to be embodied? Conscious?
Can the brain do backpropagation?
Does an AGI need subjective experience?
2021-04-06 00:00:00 +0000
How the recommendation architecture could instantiate general intelligence and consciousness.
Andrew Coward and Thomas Gedeon have proposed a functional architectural theory of the brain, the recommendation architecture. This architecture is a framework for understanding higher cognition in the human brain at various levels of detail. A correct instantiation of this architecture as a computational model should lead to higher cognition behaviors, including general intelligence and some functional aspects of consciousness.
The Recommendation Architecture
Coward’s early work, namely Pattern Thinking, eloquently details a beautiful theory of brain functioning. His approach is beautiful in the sense that a very complex system suddenly becomes clear and simple upon closer inspection at each level of detail. In his account, an analogue vision of the brain is given, where the information signal propagates through levels of physical hierarchy, each time physically growing the paths in which the information flows, all the while extracting patterns in a fractal manner. Many times I have tried to come up with a proper metaphor for his explanation, but nothing quite captures the essence of this unified theory of brain activity. The failure to come up with a similar analogue construct is likely due to the uniqueness of the brain: if something were like the brain in this manner, it would in fact be a brain.
However foolish it may be to attempt a metaphor, I liken the way in which neurons propagate an information signal to pouring water through an empty ant nest. As the water flows down the already carved tunnels of the nest and chooses different paths, it erodes the tunnels travelled, so the next pour of water is likely to follow a similar pattern of tunnels. However, each time the water is poured, it’s at a slightly different angle and in a slightly different amount; it’s never the same pour twice. The water never flows through the ant nest in the same way: it makes use of the bigger tunnels that have most commonly been eroded, but as it disperses more and more, the tunnels being traveled are often new ones. Each time this happens, the information in the water is stored in the degree of erosion across all the tunnels in the nest. Information about the pouring exists as a pattern in the water that is encoded in the eroded tunnels. The most commonly used tunnels are deeply eroded, sometimes so much that they collapse and merge with new tunnels to disperse the water even more. We could also say the tunnels near the bottom of the nest receive water that carries information about what has previously been eroded, for it carries bits of dirt from the earlier tunnels. In this case, the later tunnels are chosen based on the overall ability of the water to flow through them, that is, on the overall information of the water in that place, which encodes the previous tunnels.
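The erosion dynamic in this metaphor is essentially use-dependent path strengthening, and it can be caricatured in a few lines of code. This is only a toy sketch of the metaphor itself, not of Coward’s theory; all the names and numbers below are my own illustrative assumptions.

```python
import random

# Toy model of the ant-nest metaphor: tunnels are weighted paths, each
# "pour" routes drops of water probabilistically by erosion depth, and
# every tunnel a drop passes through is deepened a little, so past pours
# bias future flow (a rich-get-richer dynamic).

def pour(erosion, n_drops=100, deepen=0.1, rng=random):
    """Route n_drops through the tunnels, then deepen each tunnel by use."""
    counts = {t: 0 for t in erosion}
    total = sum(erosion.values())
    for _ in range(n_drops):
        # Choose a tunnel with probability proportional to its erosion.
        r = rng.uniform(0, total)
        for tunnel, depth in erosion.items():
            r -= depth
            if r <= 0:
                counts[tunnel] += 1
                break
    for tunnel, used in counts.items():
        erosion[tunnel] += deepen * used  # use deepens the tunnel
    return counts

random.seed(0)
tunnels = {"left": 1.0, "middle": 1.0, "right": 1.0}
for _ in range(20):
    pour(tunnels)
# Information about the history of pours is now stored in the pattern
# of erosion: tunnels favored early have become disproportionately deep.
print(tunnels)
```

The point of the sketch is only the feedback loop: sampling is biased by past use, and use updates the bias, which is the sense in which the nest "stores" the information of previous pours.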
Information in the brain likely propagates similarly. An information signal flows through the neurons, changing them so they will be more likely to fire in similar conditions, moving information from the input into fractal patterns of representation until the signal recruits the system to do something. In Coward and Gedeon’s recommendation architecture, the “do something” part is critically distinct from the pattern extraction hierarchy. In this later work, the idea of pattern extraction is likened to condition detection and definition (CDD), where information (a condition) is detected and stored (defined), largely by the cortex. This subsystem is separate from the behavioral recommendation (BR) subsystem, namely the basal ganglia, limbic system, and cerebellum. The distinction between the two is both functional and structural. The part of the system that encodes the information cannot also decide how to use that information; the CDD and the BR are each optimized differently. The CDD encodes information, and the BR looks to those encodings as recommendations for behavioral decisions. However, the information provided by the CDD consists neither of categorical features of the condition nor of behaviors themselves; it consists of abstract encodings of the condition.
The BR has to integrate the information given by the CDD in a way that leads to an appropriate behavior. The first obvious implication is that the whole system has to be embodied in an environment: the system makes behavioral decisions, and once enacted, those decisions have repercussions on the system, consistent with embodied cognition theory. However, understanding the nature of the BR is no trivial task either. The basal ganglia does the bulk of the integration work: receiving the CDD output from the cortex and bias from the limbic system, accessing appropriate behaviors to implement based on past rewarded behavior, and selecting a winning behavioral output. The behavior is then often enacted via the thalamus, which communicates with the brainstem and back to the cortex to encode the behavior as part of the overall CDD. In some cases a rapid response is needed and some of the circuitry can be skipped; in other cases a sequence of often-used behaviors is recruited from the cerebellum. Overall, we once again have a system that is not easily described from one level of detail.
Going back to our ant-nest metaphor, we can describe the BR similarly. If we took a picture of the last layer of tunnels at the bottom of the nest, it would look like an abstract painting. Without seeing the rest of the tunnels, if presented with this picture, you could only contemplate its origins, much as you would the feelings and meaning of an abstract artwork. Except in this case, we can imagine a tiny robot with two legs and a cup with eyes for a head. This little robot has to position itself to catch the water falling out of the nest. It looks at the abstract picture of tunnels and has to use it to determine a behavior. The robot is rewarded for doing its job, and the only thing it’s capable of doing is moving left or right to catch water. If it doesn’t catch enough water, it won’t be able to pour it back into the ant nest, and thus won’t be able to catch it the following day. The first time the robot looks at the picture, it doesn’t know whether to move left or right or stay centered, so it makes a guess. If the guess is lucky, the robot is more likely to make that decision again, since it allowed the robot to continue its job. The next time the robot looks at the picture, if the picture looks similar to the last time it looked and behaved appropriately, it’s likely to repeat that behavior. If the picture doesn’t look similar, it may decide to try a new behavior. In this way, the robot stumbles about until it can start to decode the abstract pictures into correct behaviors. If it’s very windy, the robot might be swayed to go a certain direction. If the picture looks nearly identical to one seen in the past, it can enact its decision rapidly.
In this metaphor, the robot is getting the CDD from the picture of tunnels and deciding what to do based on reinforced pathways that led to rewarded actions in the past. As infants, we are subject to this steep learning curve, trying to work out which actions yield the best rewards. We also have to figure out which behavioral decisions lead to better information becoming available to the CDD. Once we learn a good sequence of actions and when that sequence is needed (based on the CDD), we can store it in the cerebellum and call upon it to move quickly. The amygdala and hypothalamus suggest biases on behaviors based on our body state, much like the wind would bias which direction the robot moves in. In this way, the information provided by the CDD is used as a recommendation to the circuitry involved in behavioral decision making.
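The robot’s trial-and-error mapping from abstract pictures to movements is, in essence, tabular reinforcement learning, and a minimal sketch makes the loop concrete. Everything below is an illustrative toy under my own assumptions (made-up picture names, an epsilon-greedy guessing rule), not an implementation of the recommendation architecture itself.

```python
import random

# The cup-headed robot as a tiny tabular reinforcement learner: the
# "picture" is an abstract state, the actions are left/stay/right, and
# catching water is the reward.

ACTIONS = ["left", "stay", "right"]

class WaterCatcher:
    def __init__(self, epsilon=0.2, lr=0.5, seed=0):
        self.values = {}        # (state, action) -> learned value
        self.epsilon = epsilon  # chance of trying a new behavior
        self.lr = lr
        self.rng = random.Random(seed)

    def choose(self, state):
        # Unfamiliar picture, or an exploratory mood: make a guess.
        if self.rng.random() < self.epsilon or not any(
                (state, a) in self.values for a in ACTIONS):
            return self.rng.choice(ACTIONS)
        # Familiar picture: repeat the behavior that worked before.
        return max(ACTIONS, key=lambda a: self.values.get((state, a), 0.0))

    def learn(self, state, action, reward):
        # Nudge the stored value toward the outcome just experienced.
        old = self.values.get((state, action), 0.0)
        self.values[(state, action)] = old + self.lr * (reward - old)

# The environment: each abstract picture secretly corresponds to one
# correct direction; the robot must stumble onto the mapping.
correct = {"swirls": "left", "stripes": "right", "dots": "stay"}
robot = WaterCatcher()
for _ in range(500):
    state = robot.rng.choice(list(correct))
    action = robot.choose(state)
    robot.learn(state, action, 1.0 if action == correct[state] else 0.0)

# Inspect what the robot has decoded each picture into.
for state in correct:
    print(state, "->", max(ACTIONS, key=lambda a: robot.values.get((state, a), 0.0)))
```

The analogy is loose but direct: the state lookup plays the role of the CDD output, the value table plays the reinforced pathways of the basal ganglia, and a bias term (the wind) could simply be added to the action values before the `max`.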
2021-04-05 00:00:00 +0000
A note to new students of science. Start by trying to understand what science is rather than understanding your field of study. In doing this, you will likely go down a path of studying great scientists. This will help you figure out your scientific style. Then start studying your interests. Periodically interject that rigorous studying with studying your disinterests. This will likely help you ask better questions regarding your interests. Keep asking questions, and when you think you have an answer to a question, ask a bigger question or ask the question differently. Keep researching, studying, asking. Don’t let up; you might regret the lost time later. The goal is to eventually understand one thing very well, and to do this you may have to understand many other things not very well. Eventually, with your own style, you’ll come upon a good question: your question to science. Be relentless in uncovering this question. Don’t ignore the signs along the path. Once you find the way, go all the way in answering this question.
2021-04-02 00:00:00 +0000