Shareholder Value Fixation of AI and Robotics: A Recipe for Failure
by Lynn Parramore | Sep 11, 2023
In the 1970s, economist Milton Friedman, along with economics professors like Michael Jensen, promoted the idea of maximizing shareholder value, basically telling businesses that their sole purpose was to enrich shareholders, period. Never mind offering great products and services, investing in employees, innovating, or doing anything useful to society. By the 1980s, the concept had taken off: CEOs used it to justify practices like stock price manipulation and holding down wages. The result, according to economist William Lazonick, a leading business historian, has been nothing short of disastrous, miring American companies in short-term thinking, creating poor working conditions, delivering shoddy products, and driving income inequality.
Despite the increasing criticism this concept has received in recent years – along with showy renunciations by CEOs — the shareholder value fixation still holds sway across the American business landscape. What happens when new technologies are plugged into a system distorted by this flawed ideology? Nothing good, warns Lazonick. As he sees it, until the idea is relegated to the dustbin of history, every new technology will be used to fulfill its greedy aims rather than benefitting the people who do the work. Or anyone else, for that matter. Lazonick spoke to the Institute for New Economic Thinking about what developments in AI and robotics might mean for American employees.
Lynn Parramore: There’s a lot of talk about what emerging technologies like AI and robotics mean for businesses and employees. What historical context do we need to keep in mind in viewing these developments?
William Lazonick: The issue is whether or not our institutions will focus on upgrading the capabilities of the labor force as technological change occurs. If we don’t want such changes to negatively impact employees, we have to make some major adjustments. We would need to have education more freely available and to figure out some kind of decent transition in terms of early retirement for some workers. Others would need to be retrained. Our society could thank employees for the work they have done and then move to upgrade the next generation. We would need to recognize that a lot of routine work is going to be automated away, and if it isn’t automated away, it’s going to be done by cheaper labor somewhere else. The persistent under-compensation of American workers is a key issue. In a place like the U.S., some businesses might not even opt to use advanced technology because it costs too much money and if you can get workers at $12 an hour, then why bother?
Historically, in the automobile industry, we saw how technological changes impacted Black employees and white blue-collar workers. In the U.S. in the 1980s, it wasn’t automation itself that did them in. Rather, it was a change in corporate ideology. Specifically, it was the promotion of shareholder value — an orientation in which everything was about getting the stock price up and funneling money to shareholders. Business owners who adopted that mindset began to see workers as just an impediment to their goal of more profits that could be distributed to shareholders. Businesses found that if they could get rid of unions and find cheaper labor in the South, they could press down wages.
Unfortunately, this is still the regime that’s out there. In the many years I’ve been researching the issue, it hasn’t gotten any better. In fact, it’s gotten worse. Look at the political landscape. Some politicians, like Bernie Sanders, argue in favor of increased accessibility to higher education, of free higher education, which would help the transition to advanced technologies. Well, guess what? We used to have it! But the pernicious mindset that took hold in the 1980s is still with us, which means that politically, the adjustments that we need to make will be very difficult to achieve.
LP: It’s ironic that some of the older politicians who argue that free higher education is impossible likely enjoyed free tuition themselves since most colleges and public universities were free in the U.S. until the mid-1960s.
WL: Yes. Unfortunately, free higher education not only disappeared, but student loan rates became extortionate. We began to behave as if we didn’t actually want people to get an education.
LP: So new technologies are going to be plugged into the system that we have, which does not support a positive transition for workers. I lived in the Czech Republic just after the Iron Curtain fell, and you were hard-pressed to scare up a telephone. The technology, of course, had been around for ages, but the regime hadn’t wanted it for regular citizens. You could clearly see that the structure of a society is directly related to whether, how, and when technology is going to advance and how it will be used. The regime, the dominant system, is the issue. Not the technology.
WL: Yes. The screenwriters’ strike has brought this to the fore. You might have thought looking at all the streaming that goes on at Netflix, etc., that there would be plenty of good opportunities for people who want to write screenplays or work as actors. But then you find out that the jobs are offered in a power context where people are pressured to take the job under poor conditions because somebody else will always take it. Even though the demand has expanded, there’s an even bigger supply to fill that demand. Production companies are not necessarily hiring stars who command high salaries. It’s an unequal system, winner-take-all. A few stars earn huge amounts of money but most others get low wages. That’s the world we live in. You can see that in other industries, too. Compared to the rest of the world, the U.S. has a particularly unequal system – it’s structured to press down wages. That tells you what you can expect from AI and robotics.
LP: Are you concerned that these advanced technologies give businesses even more strategies to hold wages down?
WL: Oh, yes. Take academic work, for example. AI could start writing publishable papers. Someone could mine my research and put it out using an AI program.
Technologies that displace the skills of people and put them into machines — which has been going on for centuries — can make life easier, of course. The wheel gives you the ease of rolling heavy things along, but the wheel displaces certain kinds of labor. Often the labor that technology displaces is heavy labor, routine labor. AI is a little different from that – or maybe very different, I’m not sure – because it can displace intellectual labor by using the intellectual database. This changes a lot of things. A script or a press release can be written by AI. Someone might be checking it over to make sure it’s factually correct, but the actual writing of the thing can just be done by AI with input about grammar and so on. Is that inherently bad? No. It depends on what those people who are writing company press releases could do with their writing skills. Is there some other work that they could do that AI couldn’t do? Work where they would have to dig for research and new knowledge? The outcome for people depends on whether, as we take advantage of these technologies, our society makes it a priority to ensure that people have some transition to a situation where they can make a living or upgrade their capabilities. Our society is not currently structured for that.
LP: What happens if we just leave it up to businesses to make the transition?
WL: Where are businesses currently putting their resources? They are putting them into stock buybacks. I just checked Apple, which is the record-breaker in this. They’ve done over $610 billion since October 2012. That’s just one company, but it puts pressure on every other company to try to get their stocks up. We need a shift in which corporations use resources for things like retraining workers. For a company like Apple, all kinds of workers could be out of a job as the databases become deeper and the algorithms become more sophisticated. Instead of doing stock buybacks, companies like Apple should be paying their share of revenues to the government so that we could have society-wide programs to make the transition.
LP: The idea that new technologies might cause some type of jobs apocalypse is tricky. Those in power can use it to instill fear and get people to accept subpar jobs. Is it a jobs apocalypse we need to be worried about, or is it a scenario in which jobs are going to shift in ways that don’t benefit most workers very much?
WL: In some sense, I think the apocalypse has already happened in that workers are not valued.
It happened 40 years ago. From my point of view, it was absolutely the shift to the idea of maximizing shareholder value – even before people were using that language. That’s what it was about. That’s what was being pushed. There are a lot of reasons that shift happened, but partly it was because the people who could make a lot of money found that they could just exploit people doing routine work.
Look at the case of robotics over the last decade. The irony is that the leaders in robotics, in both implementing robotics and in producing robotics – the Japanese first, and the Germans second — are societies in which blue-collar workers are more secure. There has been an upgrading of the labor force, and the Japanese and German workers were much more involved in what was happening on the shop floor as part of their quality systems and dealing with production issues. They were talking to engineers in a way they didn’t in the U.S. or Britain. When it came to robotics, workers were imparting their knowledge and they were not afraid of losing their jobs. That gave the Japanese and Germans a great advantage in the technology. With globalization, there’s a question of who is going to make better use of AI as a platform for doing higher quality work.
Another thing — and I’m not exactly sure how it’s going to work out with AI – is that instead of talking about goods and services, a lot of people talk about products and services. Well, that is wrong. A good is something I can give to you. It doesn’t matter that it came from me. I don’t need to have any intervention in your ability to use it.
LP: Like, you give me a hammer, I can just use it right away? It doesn’t matter where it came from?
WL: Right. If it starts breaking apart, you might care about the brand name, but it’s different if using the thing requires an intervention. Say you’re using Apple software. You may have to call Apple to figure out what’s going on. It requires a service. It’s in the interest of companies like Apple to get rid of those services for goods. The software, of course, allows us to do all kinds of stuff that we could never do before in terms of using our computers. For some of us, that’s helpful. That’s the platform on which we work, and for those of us who have enough education, skills, and ambition, we find ways of providing our own labor services in ways that wouldn’t be possible without these goods.
LP: Like blog platforms that gave bloggers a way to express their views in ways that weren’t possible before.
WL: Right. When I started doing research, there was no internet. When I was a graduate student, I was so excited when I found out about White-Out to make corrections! Everything has changed in terms of doing research, of where you have to physically be, in terms of time. It made things easier for academics, but a lot of people got left behind.
Here’s the bottom line. A platform like ChatGPT can turn a service into a good. However, there’s a certain caution because it may not be the same quality and the work may have to be checked. If a qualified person were checking the product rather than a qualified person writing it, then that would be the service. That would be the intervention you would need for the job. You’d need people fact-checking something written by an algorithm, for example. Those kinds of jobs might be created. There are also ways that ChatGPT and similar platforms could allow us to solve problems, but we need people with the capabilities to make use of the platform to do that instead of expecting the technology itself to do it. AI, for example, is being used by pharma companies because there are these huge libraries of experiments out there, and if you’re looking for a drug, you can go and access that database in new ways that are independent of the scientists. But the scientists have to know what they’re looking for and they have to know how to evaluate the experiments for safety, for effectiveness, and so on. There’s an area where we probably can get much more drug innovation if we do this right.
We’ve been doing this on mRNA vaccines, no doubt about it. But it only really works if the power over the research and the medicines is in the hands of the right people — people who actually want to further medicine. If you have highly qualified people who are really interested in the science and know the science and are trained, you can make great leaps forward. AI can be a tool.
LP: Or a cudgel.
WL: Yes. How it’s going to be used depends on the interests and incentives of the people who control it. Anybody who thinks that the interests and incentives of companies in the U.S. right now are to create a more highly qualified labor force and to pay more taxes to the government is not being realistic, frankly.
© 2023 FM Media Enterprises, Ltd.