Microsoft is desperate with Windows 11 and it could prove disastrous
The company is bringing the ChatGPT AI chatbot to hundreds of millions of desktop taskbars. What could possibly go wrong?
KOSTAS FARKONAS
Published: February 28, 2023
It’s no secret that Microsoft, the undisputed leader in certain tech markets and verticals, is also playing catch-up with other products and services, such as Web search (Bing) and Web browsers (Edge). It would also like to find success with products that haven’t proven all that popular, such as Windows 11.
Any company vying for position in these highly important markets would probably try to make full use of any competitive advantage it might gain, but there are some lines not meant to be crossed — and it appears that Microsoft has now done just that by trying to incorporate “artificial intelligence” nobody asked for into its newest operating system.
ChatGPT in my Windows taskbar, you say?
The Redmond giant announced, via a post on the official Windows blog, that it will soon integrate “the new, AI-powered Bing” into Windows 11 through an operating system update. For those not already alarmed, the “AI-powered Bing” is just Microsoft’s decade-old Web search engine (currently responsible for no more than 3% of all search queries) with OpenAI’s ChatGPT natural-language chatbot recently built into it. The combination has been offered in beta form to anyone interested since the beginning of February, but a number of well-documented, widely reported problems big and small (from the AI providing clearly erroneous answers to exhibiting bullying or manipulative behavior while chatting) forced Microsoft to limit its use to beta testers rather than release it to the general public.
Well, it now seems that Microsoft — in an effort to promote Windows 11 and the use of Bing as the “go-to” search engine for Windows users — feels no compunction about integrating that imperfect, highly controversial chatbot into an operating system used by hundreds of millions of consumers worldwide. The “AI-powered Bing” (in essence, Edge constantly connected to the ChatGPT mechanism) will be added to the Windows 11 taskbar in the form of “a typable Windows search box so that all your search needs are in one easy to find location”.
There’s no mention in Microsoft’s official blog post of whether this new function will be optional (right now one has to be beta testing the new AI Bing to get the necessary Windows 11 update for the taskbar search box) or whether it will eventually be mandatory (we’ll have to assume that it will). There’s also no mention of the type and amount of user data the mechanism collects, or of what is done with that data once collected. All the company wants us to know is that this new functionality is exclusive to Windows 11 (there’s no technical reason it couldn’t be done in Windows 10 too) and that by using it “we’ll be more empowered to harness the world’s information”. Right.
Beta AI on such a scale is a recipe for disaster
Under normal circumstances, anything new in tech that makes consumers’ lives easier would be welcome, be it an operating system feature or anything else (God knows Windows 11 needs something to make it a worthwhile upgrade over Windows 10). But this is different because there are two major problems with integrating ChatGPT so tightly into an OS as widely used as Windows 11, and together they could spell trouble for a lot of people — something Microsoft does not seem either to understand or to care much about.
Problem number one: ChatGPT — in fact, the whole OpenAI system it’s based on — is, for lack of a better word, immature. Incomplete. A work in progress. As impressive as it may initially seem, it’s still wildly inaccurate at times, and thus untrustworthy. Countless people have published examples of ChatGPT giving totally wrong answers to simple enough questions, providing “facts” that are not true at all, even making up information as it pleases in weird, fascinating but also troubling ways. There are whole threads on Reddit either making fun of “AI Bing” or expressing deep concern over the current state of its ChatGPT core. There is also no shortage of stories and articles already published about “AI Bing” and ChatGPT being “tools that have to be handled carefully”, as every bit of information they provide has to be double-checked and verified by a human before it’s actually used.
Which leads to problem number two: as entertaining as it might be for us geeks to make fun of ChatGPT’s current weaknesses, most people are not aware of them. That’s right: mainstream Windows users, let alone the general public, do not know about the limitations of this natural-language model, how inaccurate it can be, or that they can’t just trust the information it provides as-is, without verification.
But Microsoft is now putting that innocent-looking search box in front of almost 300 million consumers (according to Statcounter’s latest Windows 11 stats), and it’s safe to assume that only a very small percentage of those have already used ChatGPT and are aware of its limitations. Most people, when typing a question into that search box, will expect an answer based on a reliable source. It is, after all, offered by the operating system itself and so, theoretically speaking, endorsed by Microsoft, no?
So what if someone needs answers to questions for, say, a school assignment and gets inaccurate information? What if that information is to be used at work, affecting other people too? Or, stretching it for the sake of argument, what if someone searches for a treatment for a disease and gets an answer that’s not scientifically sound? What if someone searches hastily for something to help with food poisoning, a snake bite, or a stroke? What if there are dire consequences of that inaccurate information? Who’s to blame then? Microsoft? OpenAI? Certainly not “AI Bing” or ChatGPT itself, since it won’t even be aware that it provided an incorrect answer!
Putting its interests above those of consumers is typical Microsoft
This is serious for another reason as well: until now, ChatGPT was something one had to seek out in order to use. Through OpenAI’s website, consumers had to create an account and log in to use it. Through “AI Bing”, they had to sign up for the beta preview. But now ChatGPT will be just a mouse click away for anyone using a Windows 11-based computer, whether or not they are informed about the system’s current state. Come March, everybody is supposed to receive this functionality as part of that month’s Windows 11 update. Will “AI Bing” be something users can turn off? How many people will know that, and how many will actually go ahead and disable it?
Microsoft does not seem overly concerned by any of this — in fact, it is actively pushing for the adoption of “AI Bing” by as many Windows 11 users as possible without even passingly mentioning that it’s beta software they’ll be using. Millions upon millions of consumers will practically be turned into beta testers without realizing it, something that’s always insulting to watch, even leaving personal data security and privacy concerns aside. Integrating ChatGPT into Windows 11 at this point in time, in order to promote its products and services, is an arrogant, irresponsible choice that Microsoft is making at a scale beyond belief, and it will be interesting to see how the company plans to handle the inevitable complications.
Regardless of those complications, though, there are moral lines whose crossing simply can’t be justified by executive ambition or market-share targets. It makes one wonder whether there’s anyone left at Microsoft’s Redmond headquarters who actually understands that.