I've been reading a lot lately about AI, mass surveillance, changes to social media algorithms, and so on, and it got me thinking:
Have developments in information technology reached a point where they are no longer improving society and are instead largely harming it?
-
I grew up alongside the internet. When I was a kid, my computer was so slow that I turned it on as soon as I came home from school to give it time to boot up while I did other things. When YouTube became a thing, bandwidth was slow enough that I had to do something similar, loading up the videos I wanted to watch ahead of time. Over time, improvements to bandwidth and data transfer protocols took us from only being able to send numbers and text to being able to send high resolution pictures, video, audio, and even the data needed to update an online game in real time. At some point in the last few years, this got good enough to do everything I wanted at the speed I wanted, and I haven't really had any bottlenecks or slowdowns since then outside of some very specific tech issues.
-
I went from having something that just made phone calls to having a miniature computer in my pocket that can do all of the above about as well as my dedicated computer.
-
Media editing software has become so widely accessible that ANYONE can participate in generating culture and sharing it with the world.
-
Search and recommendation algorithms got good enough at some point that people could effectively comb through this massive new ocean of data.
And then... what new technology has been developed or improved in the last few years? Algorithms have been made worse by being optimized around advertising, data collection, and other business interests. The availability of AI has led to a deluge of garbage gunking up the web and has made misinformation commonplace and hard to ignore. Mass surveillance has become more widespread and more advanced. It feels like all our recent and ongoing advancements have been net negatives for society outside of serving the interests of a handful of capitalists. So many of the brightest minds of our time are working on things that don't help anyone.
So what do you think? What was the last innovation (in internet technology, obviously; we've had advances in medicine and the like) you'd consider to be good for us? Are there any promising lines of work being done today that you believe will lead us to a better future for the internet? Or are you pessimistic about it?
In a broad sense, I don't agree with the premise that technology is inherently good and that it's just a matter of how society chooses to use it.
Technology enables people to do things that previously weren't possible. It gives people powers that those who don't or can't use the technology lack. It fundamentally changes the power dynamics between people. You don't get to choose how someone else uses the technology; you have to deal with its existence.
For example, guns. A gun is a weapon that lets people inflict violence on others very effectively without much, if any, athletic prowess. Previously, someone more athletic could hold power over someone weaker; with guns, the weaker person is on an even playing field.
Now, guns are pretty difficult to manufacture, so an authority might be able to effectively control their availability. But let's say someone discovers a method that lets basically anyone make a gun cheaply at home. Now it's harder to stop people from getting them. The technology becomes more accessible, and once again this changes the potential power dynamics in society. We could all come to an agreement on how we want to use guns, but that doesn't really matter if some guy can secretly build one in his garage, put it in his pocket, and go shoot someone. The very existence of this technology has changed the nature of social reality.
Now compare that to AI. Generative AI lets people quickly produce novel media that is becoming increasingly difficult to distinguish from human-made media. While this was technically possible before, it was far more difficult and slow, and there is media in the world today that could not have existed without AI (if only insofar as the larger quantity means things that wouldn't have been made in the same time period now can be).

AI isn't even a physical device. A computer program is essentially an idea translated into a language a computer can understand. It might be difficult to learn how to program, but anyone with a computer can do it. Anyone can learn to write a computer virus, so we all have to live in a world where we have to be careful of viruses. Anti-virus software changes that dynamic again, but it doesn't change the fact that someone can learn to write a program that gets around it.

Now, AI as it works today is a bit harder to make on your own with knowledge alone, because it requires large quantities of data to train the models. So technologies and policies that restrict people's access to data could limit the availability of AI. But future developments may discover ways to build AI models with little or no data, at which point it would become easy for anyone to have that technology. So even if right now we pass laws restricting how AI companies operate, so that people don't have easy access to the models, or require models to ship with logic that helps identify their outputs, those laws would be meaningless if it were trivial for anyone to make their own.
Now, it's going to be different for every kind of technology, and that's interesting to discuss, but the root of any human decision around a technology is the fundamental nature of what that technology is, does, and enables.