Today's article is catalyzed by a fascinating essay on one of my other passions - gaming (video or otherwise)! Kirk McKeand on Fan Nation wrote this insightful piece titled: Could AI destroy video games? (msn.com).
This comes at a time when we are witnessing a lot of hype around AI. I wonder if we have reached peak genAI hype now that Matthew McConaughey is a spokesperson for AI. We've most likely jumped at least one shark. He is part of a salesforce.com ad campaign that compares AI to the Wild West and casts data as the new "gold" (as in the gold rush). But wait, wasn't data supposed to be the new oil? Maybe next year it will be the new lithium or neodymium. I dread to think what nonsense we will see in the Super Bowl commercials. By the way, 'jumping the shark' is an oldish cultural reference to an episode of Happy Days. The Fonz. Water skis. Shark. Google it. It's worth it. But I digress.
One nugget in his piece I want to underscore is his point decrying the overuse, misattribution, and confusion about what is (or isn't) AI. This bugs me, too, because our language matters. When a technology is attributed so broadly, framing its ethical and practical usage becomes difficult due to the slipperiness and disconnection between an object and its meaning. This can allow bad actors to promulgate self-serving agendas.
"The waters have become so muddied that you could sell them on eBay as bathwater. AI could be behavior, a world, acting, art, a language model, how a Boston Dynamics robot navigates, and on and on. It’s a tool – or a wide selection of tools – that can be used for good or ill. But again, we live in Hell World (never forget that fact), and everyone’s Dr. Evil."
Granted, some of this delights my curmudgeonly side. 'You kids and your damned TikToks. We had USENET and we were quite capable with 16 bits, thank you.' But I think it is important for us to have some consistency in what we call 'stuff'. Especially because, as McKeand reminds us, these are just tools. If we over-attribute these tools as magical/disruptive/inevitable components of our lives, it can grant more credence to those wishing to unjustly use, control, and profit from them. To this end, McKeand also highlights the ongoing ethical and legal issues involving copyright abuse and current-gen frameworks' capacity to exploit creators' output.
Back to gaming - he argues we face a potential future in which our beloved games will converge towards maximum suck because the 'bots will be writing not just the code but also the plot progressions, objectives, and challenges. And as we know, most generating* AI regresses to the semantic mean. Now, humans are quite adept at developing and churning out derivative and 'ick' games (cough-cough, Fallout 76 v1). Even buggy and flaky AAA games, however, are reflections of our quirky and idiosyncratic human nature. Left to their own predilections, current generating* AI devs and corporate overlords could engender a reality in which machine-derived content becomes the fodder for ongoing training and learning, thus begetting the mediocre future of 'every game is Fortnite'. Take it away, Kirk:
"The problem is the majority of the technological leaps we’ve made have created more jobs. More specialization. This new take on AI is here to replace us. Every job. It’s the self-service checkout for retail and the self-driving car for the taxi. And even if you don’t care about people losing their livelihoods, consider this: it makes worse games. Weaker art, vapid dialogue, monotonous voice acting. It’s low-tier trash for gremlins with no friends. The video game layoffs are already bad, but things could be about to get worse."
Right-on, right-on (in my best Matthew McConaughey impersonation).
One last quote to close out my regurgitation of his thought-piece:
"Now, I’m no expert, but this seems bad. Not only are we putting developers out of work, but we’re stealing art and potentially committing crime while we’re at it? Great technology you’ve got there, lads. The most depressing part? There’s no going back. You can’t, as they say, put the Akinator back in the lamp. The best we can hope for is that it turns out to be a fad. That way it can join NFTs and the blockchain in the big bin in the sky."
I am perhaps more optimistic. Look at how well-intentioned automation has not really worked out as hoped. Self-checkout lanes and self-driving taxis are good reminders that hype and hope do not always a future make. Making software systems that interact with the physical world and sociological beings is hard. Unintended consequences and non-linear issues abound. These hard-to-engineer situations get at a weakness at the dark heart of currently used LLMs and attention-based BIG ML models. Gary Marcus et al. have, I think, well articulated these shortcomings. I am amazed by the utility and power of LLMs - I am not doubting that. I'm more interested in seeing that the ongoing evolution and attention (cough-cough, funding) also go to computational methods that will advance the TRUE cognitive capabilities of our machines - some mixture of knowledge engineering, causal inference/structural modeling, etc.
This brings me to the Luddites. The term, a reference to the 19th-century labor movement, is often colloquially misused to describe someone or something as anti-technology or simple-minded regarding its use. The Luddites were NOT anti-technology nor opposed to the use and mechanization of industrial arts. They WERE opposed to the unchecked use of mechanization by greedy corporate owners and managers to displace skilled artisans and workers. This was/is the labor-driven element of the movement. An often-overlooked aesthetic motivation of the Luddites was their worry that the wholesale mechanization of crafting would lead to suckier goods, products, and services. A very nice overview of the Ludds (including - in true, satirical Brit fashion - their flamboyant mascot "General Ludd") can be found in this piece by Richard Conniff in The Smithsonian (publication date of 2011, so it was definitely written and edited by humans): What the Luddites Really Fought Against | History | Smithsonian Magazine. It is good reading and full of still-relevant thoughts and fruitful bits for rumination. Like:
"They [Luddites] confined their attacks to manufacturers who used machines in what they called “a fraudulent and deceitful manner” to get around standard labor practices."
Hmmm… doesn't this sound familiar? This is a good admonition not to lose sight of humanism in the face of unabashed techno-optimism. This is one of many reasons universities need to protect and respect the humanities. But that is fodder for some other time.
"[…] It’s possible to live well with technology—but only if we continually question the ways it shapes our lives. It’s about small things, like now and then cutting the cord, shutting down the smartphone and going out for a walk. But it needs to be about big things, too, like standing up against technologies that put money or convenience above other human values."
Right-on, right-on.
BTW, I do catch the subtle irony that McKeand's post came to my attention via msn.com and whatever cloud-elf MSFT has serving up content to my landing page. I, for one, welcome our robot overlords! (just a hedge).
*Generating AI - I am using this form intentionally. To the point about the importance of naming stuff, I believe the use of 'Generative AI' as a catch-all to describe any and all types of interactive machine intelligence muddies the waters. I use 'generating' to describe use-cases in which some ML is applied to generate or synthesize content (textual, visual), either in response to or in line with user guidance/prompts. Generative AI techniques (e.g., GANs, attention, Siamese nets, VAEs) can be and are used in AI use-cases in which the goal is to detect, classify, or predict.
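To make that last distinction concrete, here is a minimal, hypothetical sketch (my own toy illustration, not from McKeand's article): a generative model - in this case just a one-dimensional Gaussian fit to "normal" observations - used not to generate anything, but to detect outliers by likelihood scoring. This is the same basic pattern behind generative-model-based anomaly detection, scaled down to a few lines of standard-library Python.

```python
# Toy illustration: a *generative* model (a fitted 1-D Gaussian) used for
# *detection*, not content generation. All names and numbers are made up.
import math
import statistics

def fit_gaussian(samples):
    """Fit a 1-D Gaussian (a tiny generative model) to training data."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return mu, sigma

def log_likelihood(x, mu, sigma):
    """Log-density of x under the fitted model."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def is_anomaly(x, mu, sigma, threshold=-6.0):
    """Flag inputs the model considers very unlikely."""
    return log_likelihood(x, mu, sigma) < threshold

# 'Normal' observations cluster around 10.
normal_data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.05, 9.95, 10.1]
mu, sigma = fit_gaussian(normal_data)

print(is_anomaly(10.0, mu, sigma))   # in-distribution -> False
print(is_anomaly(25.0, mu, sigma))   # far from the data -> True
```

The point of the sketch: the "generative" part is the density model, while the use-case is classification - exactly the kind of mismatch between technique and application that the catch-all label papers over.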