Dr. Massimo Mazzotti of the University of California, Berkeley – Algorithmic Life
Massimo Mazzotti’s research interests lie at the intersection of the history of science and science studies. He is especially interested in the historicity and situatedness of mathematics, logic, and deductive reasoning, and in the social processes that can make them universally valid. He is also interested in using technological systems and artifacts as points of entry for the exploration of specific forms of social organization and power distribution. His past and present research projects have focused primarily on the early modern and Enlightenment periods, with significant incursions into the nineteenth and twentieth centuries.
Listen to the Podcast
In this special episode Dr. Mazzotti talks about Algorithmic Life and:
- The changing definition of intelligence
- The political effects of algorithms
- A historical perspective on AI
- The Clock as technical disruption
Read the Transcript
John Wall: Hello and welcome to Stack & Flow. I’m John Wall.
Sean Zinsmeister: And I’m Sean Zinsmeister.
John: Today’s guest is historian and sociologist of science at UC Berkeley, Dr. Massimo Mazzotti. He’s here to talk to us about his latest article on the Algorithmic Life. Dr. Mazzotti, welcome to the show.
Dr. Massimo Mazzotti: Thank you. Thank you for having me over.
John: Sean, you had sent me this article a couple weeks back, and we were just amazed by how deep he went into a bunch of the things we’ve been talking about on the show recently. I’ll hand it off to you. When you picked this up, what was it that struck you to set up this discussion for today?
Sean: We spend a lot of time on the show talking about artificial intelligence and algorithms, and a lot of the innovations in how they intersect with our daily lives as go-to-market professionals in sales and marketing. I always think that in order to really understand what’s going on, it’s important to look at the historical perspective of how we got here and what it could mean for the future.
I wanted to bring in another perspective, not just an academic one but a historical one, which I think Dr. Mazzotti’s piece, Algorithmic Life, really helps with. It’s going to help us broaden the discussion and talk a little bit more about the sociology, things like that. Dr. Mazzotti, I wanted to start off with an easy background question: what got you interested in looking at algorithms? How did you end up pursuing that line of thinking?
Dr. Mazzotti: I was very much interested in engineering and mathematics as a student and in graduate school, but somehow I was asking questions that, at some point, I realized couldn’t be easily answered within the boundaries of those disciplines. For example: why, ultimately, do we trust technology or mathematics or logic? Why do they change in time, and how do they change in time? What is really local and what is universal in these techniques, mental techniques, material techniques?
Questions about rationality. Is there only one way to think about logic, mathematics, technology, algorithms, or are there possible alternative rationalities? To address these kinds of questions, you have to move beyond the technical disciplines. You have to look at the social sciences: anthropology, sociology, and, I think, particularly history, because that’s what gives you the perspective, as you just said. To put these things in perspective and see the long-term processes that shape our contemporary technical environment.
That, for me, is an important methodological principle you can use if you want to open it up for discussion and to understand why it really took the shape that it has. Where does it come from? From that point of view, coming to algorithms was, for me, almost a logical ending in my search for where technology comes from and why it takes those particular shapes and not others.
Sean: When you started diving into the history, I think it’s important to sort of build a foundation here, which is: how are you defining an algorithm? It’s a term that, in the technology sphere, gets thrown around quite a bit with a messy definition. How are you defining algorithm before launching into your piece?
Dr. Mazzotti: That’s a good question, because in a way, my piece is in part very much about this very question and the way we talk about algorithms. It’s very much about modes of speaking about algorithms, and what we can learn even just by listening to the way we speak about them. I think that’s a very useful exercise. You will realize that there’s been a major transformation in the way we use this word in the last 15, 20 years.
When I was a graduate student in History of Science, algorithm for me was just a set of instructions. It’s just a recipe. You need to solve the problem. You get the algorithm that has been designed to do that in the best possible way. You apply it and you have the result and hopefully it will work. That’s the engineering view of an algorithm. It’s just a recipe to do something.
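That engineering sense of "a recipe to do something" can be made concrete with a classic illustration (a minimal sketch added here, not from the conversation itself): Euclid's algorithm for the greatest common divisor, one of the oldest recorded recipes of this kind.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a 'recipe' in the engineering sense.

    A finite set of instructions that, applied step by step,
    is guaranteed to terminate with the answer.
    """
    while b != 0:
        # Replace (a, b) with (b, a mod b); the remainder shrinks every step.
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```

Nothing here is glamorous or threatening: you apply the instructions, you get the result. That is the pragmatic sense of "algorithm" the discussion starts from.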
This is not particularly glamorous. It’s not particularly threatening or visionary. It’s a fairly pragmatic sense of what an algorithm might be. It goes back in history. The very word came from Arabic, and it has to do with algebra, with algebraic problem-solving techniques. We know that algebra is a very powerful way of talking about things because it’s abstract. It doesn’t depend on knowledge of the objects we’re manipulating.
We can abstract from the precise qualities and just think in an algebraic way about whatever action we’re actually doing. That’s, of course, not what we mean today when we talk about algorithms. Something has been happening. The meaning has been completely transformed. It has expanded in an amazing way. Now, we refer to a lot of different things at the same time, and somehow in a contradictory way.
We refer to instructions, but we also refer to code, to software. We refer to software that is running on a machine, and also to the effects of the machine on other systems, human and mechanical. This is what we mean when we say, for example, the algorithm is driving the car. Essentially, we are saying that there’s a complex technical system that starts from a set of instructions, but there’s much more to it than that.
Then we can attribute agency, in a way, to algorithms. Once we understand them in this way, we can attribute agency to them. That’s why algorithmic talk these days is so fascinating: algorithms become agents. They can do things. This has to do with a transformation in our concepts, in our language, and in the way we refer to them.
Sean: You mentioned in your work that there’s a political dimension to algorithms, and something I think about here is that these algorithms are born from human creativity. There are humans who create these algorithms and set them into place to drive automation. Is it the human hand in their creation that drives the political dimension you were referring to? What do you think those politics are?
Dr. Mazzotti: Yeah. That’s one of the basic questions here, I think. I’m using the term political here, of course, not to mean policy. It’s politics in a deeper sense. It has to do with the way we live together as a collective. What kind of society we are. What kind of society we want to be in the future. I refer to politics in that sense, the art of living together. Organizing our life in a particular form that is different from other forms from the past or from other forms in different parts of the world.
When you think about politics in those terms, it’s obvious that algorithms have a political dimension, at least in two main senses. One is more intuitive: they have effects that are political. If jobs are erased, that is a very political effect, and it will change our society. It will change the distribution of wealth, for example.
There’s also another sense in which they’re political, which is what you were referring to a second ago: they don’t come down from the sky as ready-made artifacts. Like all the technology before them, they have been designed with certain assumptions, with certain ambitions, with certain ideas of what they might be useful for. That is something we often tend to forget.
There is all this human creativity, this human intentionality, these human dreams, maybe, that go into an algorithm. Somehow, they are embedded in it. Even if we forget about them, they will be part of the reality that algorithms make for us. That’s why it’s better to be as aware as possible of the process through which algorithms are produced, because, in a way, the story of their production also tells you what they will be like. What kind of world they will make for us.
Sean: In that, you also touched on these social realities. Something it makes me think about is living in an era of social media, in which the feeds we see are very much algorithmically crafted, if you will, which then creates that sense of reality for the individual. Is there something we can learn from history, from looking at how algorithms have shaped social realities? Are there dangers we should be looking out for when it comes to social realities?
Dr. Mazzotti: The important thing, I think, to keep in mind is that they’re not neutral. We often tend to think of technology, or logic, or mathematics as just a neutral tool. I think these tools are never neutral. Not in the sense that they are necessarily good or bad, but they somehow play a role in shaping our life. They modify the way we live. Or maybe they reinforce certain aspects of our life and just make it more stable.
They always have an effect. Once you realize that, the question is to try to be aware of the effects they’re having. Don’t just take them for granted. Don’t just assume that because this is an automated decision-making process, it’s going to be free of bias, for example. Bias might well be built into the very design of the algorithm. Or it might even be built into the dataset that the algorithm is mining.
There are some interesting cases that happened recently. You might have heard about them. Algorithms modified by machine learning, so that you would assume they’re becoming more and more accurate. At the same time, they were picking up bias, for example, profiling clients based on race or gender.
The interesting thing is that this is not something they were designed to do; they picked it up from the dataset. Somehow, past practices had been traced into the dataset, and the algorithm has somehow magnified them. The idea that because it’s an automated decision system there’s no bias might be an illusion, and a dangerous one, because we might not be aware of things that are actually going on.
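The mechanism can be sketched in a few lines (a toy illustration added here, not one of the real cases Dr. Mazzotti alludes to; the feature names and numbers are invented). A "learner" that simply imitates past human decisions will reproduce a historical skew even when the outcomes it should care about are identical across groups:

```python
# Invented historical records: (neighborhood, loan_repaid, approved_by_human).
# Repayment rates are the same in both neighborhoods (3 out of 4),
# but past human approvals were skewed.
history = [
    ("north", True,  True),  ("north", True,  True),
    ("north", False, True),  ("north", True,  True),
    ("south", True,  False), ("south", True,  False),
    ("south", False, False), ("south", True,  True),
]

def learned_approval_rate(records, neighborhood):
    """Naive 'learner': approve at whatever rate past humans approved."""
    approvals = [approved for (n, _, approved) in records if n == neighborhood]
    return sum(approvals) / len(approvals)

# The learned policy mirrors the biased past, not the equal repayment rates.
print(learned_approval_rate(history, "north"))  # 1.0
print(learned_approval_rate(history, "south"))  # 0.25
```

Nothing in the code mentions any protected attribute; the bias lives entirely in the training data, which is exactly the point about past practices being "traced into the dataset."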
In that sense, going back to your question, yes, I think once you become aware of this, you might see that in many ways algorithms are modifying, or playing a role in, things like, for example, an electoral campaign. This is not completely new in the history of technology, to be honest, because there are some beautiful case studies about the role of radio, the very medium we are using right now, in shaping political life in the 1930s, for good or for worse.
There is actually a very nice book about the way in which Mussolini’s charisma was very much built on this new fantastic medium that was radio in the 1920s. That without hearing his voice through the radio, it would have been much more difficult for him to build a certain kind of image and public persona and to be successful as he was in the early 1920s with the Italian public.
That’s not something that was built into the radio, into the design of the radio as a technical artifact, but it was an effect. It was an unintended effect of that particular technology. In that sense, it played a role in political life that was actually very significant.
Sean: Well, you talk about technological artifacts quite a bit in this piece and I really love the analogy that you drew around clocks and how you relate it back to the mechanization of society. Could you talk to us a little bit about what you were talking about in terms of clocks? I wanted to explore what that mechanization looked like. Are there dangers of algorithms being the modern day "what clocks were in the industrial revolution" as it were? Any dangers of over-mechanization that we should be looking out for when it comes to algorithms?
Dr. Mazzotti: Yeah. Actually, one of the things I hope as a historian I can contribute here is to think about artifacts that in the past have been playing similar roles to algorithms today. That’s something I think we need to do more because we tend to be very much focused on the now when we talk about these kinds of things as if everything is completely new and everything has never been seen before.
In fact, there is a lot of stuff that is new, I think, in the process of technological innovation we are seeing now, but many aspects of it have already been seen. One is the trend toward mechanization and automation, which has been going on for centuries. In a way, it’s one of the features of modernity as we know it.
In particular, the example of the clock, for me, was revealing, because I thought about the way the clock was the basic metaphor used by everyone in the 17th century, including Newton, to think about the universe. The universe for them is a clock. It’s a very powerful metaphor, because once you start thinking about the universe as a clock, it means you can find very precise laws that will tell you how it works.
There is a logic behind it. This logic is essentially mechanical. The universe, you can also think, is made of elementary particles and mechanical forces. That’s all there is. This is a very powerful metaphor that is used to guide scientific change and technological innovation, but at the same time, the clock is a real artifact and is doing things. It’s not just a tool for helping us think about reality, it’s also for modifying reality.
What the clock was doing in those days was cutting time up in a way that had never been done before. You can really break it down into smaller units, be very precise, very accurate in fact, and you can put a price, for example, on units of time. You can use time to reorganize production. You can use the precision clock to reorganize production in a way that was simply impossible before.
In this sense, you would not have the industrial revolution without the precision clock, because you need to find out how to rationalize the production process. Then you can do a lot of other things with clocks. For example, precision clocks are essential for navigation. Think of the large colonial empires of the early modern period: they could not exist without clocks, which were used to find the position of the ship at sea and to be precise about routes and the reliability of navigation.
Essentially, what I’m saying is that, if you take all this into account, the clock is one of the basic artifacts that have produced the world we live in. In that sense, I see an analogy with algorithms today, because they are doing a lot of things. They are changing the way we live according to an algorithmic logic that is different from human logic.
At the same time, we also keep using them as metaphors or figures of speech as a way of saying something about our fears for example. Autonomous technology. We fear the machines. This is like a common theme in science fiction. There is something that has to do with our fear of autonomous technology and machines becoming independent.
Algorithmic talk right now is also doing this for us. It’s helping us think about what our future could be like if it becomes increasingly algorithmic. Some of us might think that this is actually a dream, that it could be the best possible thing, and that automation will take us to some kind of earthly paradise.
Other people might see this as taking us toward dystopia, a dystopian future in which we will essentially lose meaning. We will lose human meaning in a life that is essentially run by machines. I see it like the case of the clock. Even today, we have a kind of artifact that has really captured our imagination, that we see somehow everywhere, that we use in everyday speech, and that we use to think about ourselves, about our future, and about what our life could be like.
John: That really opens up a lot of possibilities, but there is one thing I wanted to ask you about that ties into this. For all these previous discoveries and technologies that have changed the world, you could open the devices up and see how they work, and everyone could understand what was there. With algorithms, especially in commercial applications, maybe nobody knows the exact set of rules, what the biases are, and how it all works. It’s literally something that’s been kept secret and doesn’t get shared. How does that change things? What effects might that have, given that some of these things are so hidden and kept locked down?
Dr. Mazzotti: In fact, this is one of the things I think are very interesting right now. This is probably one thing that is, in part, in continuity with past technologies, but in part has new aspects. What you are referring to is a kind of opacity that algorithms seem to have compared to other technologies that, as you were saying, we can just open up and see how they work. We can understand the inner workings of those artifacts.
That’s not so easy with an algorithm, first of all because, as we were saying before, it’s even difficult to say what exactly we are talking about when we’re talking about an algorithm. We are talking about a lot of different things, so that already makes it complicated to say something precise about the possible effects.
An algorithm, as we were saying, is a set of instructions turned into code that is running on a machine, which is having effects on other systems. That is a lot of things to take into account when you start thinking about the possible effects. There is a pragmatic aspect to it: it’s difficult to visualize an algorithm, and difficult to easily see the possible effects of its actions.
There’s more than that. Of course, particularly in the corporate world, there are issues of secrecy. Algorithms, as we all know, are very often the result of investments, and you simply don’t want them to be easily inspectable. You don’t want them to be easily understandable by some outsider. There is a question of secrecy, and there’s now a lot of work on how we should try to frame this in legal discourse and try to make corporations, governments, or agencies of all kinds accountable for the algorithms they design and use.
Then there is something else which I think is even more profound and interesting from a philosophical point of view, which is a problem of understanding how algorithms work that comes from their very structure, from their very design, so to speak. This is particularly true of algorithms that are modified by machine learning. The machine learning transformations they might go through make them somehow less and less perspicuous to a human being.
Of course, they don’t have to be perspicuous to a human being to function, so that’s not a priority anymore if it’s a machine that is handling them and modifying them. This is a fact, and if you push that process, you might get to a point where you have an algorithm that is making decisions and you don’t know exactly why. For example, a client is classified as creditworthy or not creditworthy by a bank, and that decision is …
You cannot actually quite explain that decision, because somehow it’s taken by the algorithm. The logic that the algorithm follows is not a simple logic that would be perspicuous to a human. It’s the outcome of a series of automated processes and a mixing of different parameters at a scale that is impossible for a human to comprehend.
In this case, you have a machine that might become very accurate at doing what it’s supposed to do, but you’re losing the understanding of why those decisions are made. That’s an important trade-off that we’re dealing with, and we have to make decisions: how much do we want to gain in terms of power, and how much do we want to give up in terms of understanding?
Sean: You’re sort of leading right into my next question, which brings us to how this is shaping the way you think about artificial intelligence. I have to highlight that particular word, intelligence, because there are these two sides of algorithms: the thinking part and the doing part. We could certainly argue that algorithms do too much, that humans are giving up perhaps more power than they should. How should we be thinking about defining intelligence in this new world? I’m curious how your thinking has developed there in terms of AI.
Dr. Mazzotti: Yeah, I think that’s another area in which, as a historian, I’m very intrigued by the way we keep changing our own definitions of what intelligence is through history. There is a very interesting interaction between the way we define intelligence and the machines we are using. In a way, it is as if our understanding of intelligence, our definitions of intelligence, are also determined by the artifacts we are able to produce.
There is an intrinsic connection between what we think is specific to human intelligence and the things we can do through technology. Just to give you an example: if you go back in time to the Renaissance, just before the invention of the printing press, memory is taken to be one of the highest signs of intelligence in a human being. Memory is very highly valued. It is a very important skill.
Politicians or literati, as a common practice, would memorize very long speeches. There are people who are able to memorize entire books, which, of course, exist only in manuscript form at that point. The printing press would transform the world as we know it, but one important effect it had was to make memory a much less relevant skill.
Now you have the book. The book is your memory. The book is memory in a materialized form. It is a very interesting technology, first of all, which is still around, and it is an externalization of your memory. You don’t need to have those skills anymore. In a relatively short amount of time, memory can decline as something that signifies intelligence. It just becomes a mechanical capacity, something that an object can do for us.
We have delegated to the book something that we thought was a crucial element, a crucial component of human intelligence. At that point, we value it less because it has been delegated to an object. There is this interesting interplay between what we think is intelligence and what we can delegate to machines or objects.
If you keep moving in history, the skill that becomes very closely connected to intelligence is numeracy: counting and the ability to do mathematics by hand, without using any tool. That is another skill that is highly valued, at least up to the beginning of the 19th century, which is pretty much when we begin to make machines that can count.
Think about Babbage and the mechanical calculator. That’s the time when the ability of individuals to make calculations starts to decline as another signifier of human intelligence. Now we know that a machine can do that. It’s something we can delegate to the machine.
You can see now many other cases, some closer to us, like playing chess. Playing chess in many cultures was taken to be the quintessential sign of human intelligence. Now we know that computers are actually much better than us at playing chess, and it’s not a coincidence that it is no longer taken to be the pinnacle of human intelligence, as it was before the era of electronic computers.
Essentially, what we’re saying is that there’s this continuous pushing of what we take to be intelligence. It has to do with the boundary of what can be mechanized or not. Is there something that we will not be able to mechanize? I think this is one way of looking at artificial intelligence today.
Some sociologists think there is. They think that there is some form of tacit knowledge that we might not even know we possess, which is very difficult, and possibly impossible, to mechanize. You may know the famous example about riding a bike. How can you tell somebody how to ride a bike if they don’t know how? That is a very, very difficult task. It is something that is very difficult to verbalize, and therefore very difficult to turn into a program that you can run on a machine.
This was a quite convincing example, I think, but only if you are still thinking about artificial intelligence in the traditional, standard sense: the top-down, logic-based programming approach to artificial intelligence.
Now, machine learning is another way of thinking about artificial intelligence. Artificial intelligence is being approached in a bottom-up way, and that is changing things. I think it’s changing things in very interesting ways. There might still be something where it is very difficult to see how it could be mechanized.
For example, just think about running into somebody you know in the street. You just say hi to this person, or you come up with the most appropriate way of greeting them. That is a very complex task. It requires a lot of knowledge about contingent issues: the conditions of the encounter, your particular connection to that person, what would be the most appropriate way in that situation to recognize the presence of the other person, and so on.
This apparently simple thing that we do as social beings is very normal for us. We don’t even think about it. To imagine that a machine can do that, that’s a very difficult task. Right now, I wonder whether what is happening in artificial intelligence might give us the possibility, at least, to think that we can socialize a machine to the point that it will be able to perform a task of that kind. Something that was essentially impossible to think in the previous paradigm of artificial intelligence.
John: That’s a lot to think about. We appreciate you coming on and telling us about your paper and everything that’s going on in this field. It is very interesting to get a view back into history and to measure the present against it. It’s a perspective we don’t often get, because we deal a lot with technology vendors and we tend to focus just on today without looking back. We’ll have links to this article so people can check that out, but what’s another way for people to follow up and find out more about what you’re doing?
Dr. Mazzotti: We have a project at Berkeley right now on algorithms, particularly their social impact and social dimensions. We are coming out with at least two special issues of journals. One is Big Data and Society; the other is Representations. They will possibly be out at the end of this year. These come out of workshops and conferences that we held in the past year. We will go on producing other publications in the next year or two, because it’s a long-term process. Just keep an eye on Berkeley and you’ll get more of this.
John: That sounds great. Yes, we’ll definitely keep an eye on that, and we can push it out as soon as it drops, so we can keep the audience in the loop. Sean, how about you? What else have you got going on, and how can people follow up on this stuff with you?
Sean: I just had a piece come out recently on CMS Wire talking about machine learning, some of the innovations executives should be looking out for there, and how they should be thinking strategically about it. I also have the AI 101 series that I’ve been working on over at martechseries.com, where we’re breaking things down bit by bit into things people can use. Just Google Sean Zinsmeister and you can see some of the latest stuff I’ve been writing over there. Of course, head over to infer.com and you can see everything that is new and ready to be devoured there as well.
John: All right, that will do it for us. Thanks for listening and we will see you in the stacks.
April 12, 2017