The future of the PC that Microsoft envisions is one that gives Windows users more flexibility when interacting with their machines. Instead of relying solely on mouse and keyboard, Microsoft is betting big on AI and the recently announced Copilot Voice feature.
In simple terms, it allows anyone, even users without Copilot+ PCs according to Microsoft, to talk to the AI using a connected microphone. This lets Windows users use their voice for searching, getting help, or automating tasks, according to Microsoft.
Many news outlets made it appear as if Microsoft wants Windows users to use their voice exclusively when operating the PC. This is not the case, but Microsoft does believe that voice will play a much larger role in the future.
There are several uncertainties here, largely because Microsoft did not provide many details on the functionality. Tests will have to show how well, or how poorly, the voice feature works and what you can do with it.
- Is it just for communicating with the AI via voice?
- Can you use it for other purposes, e.g., dictation?
- What are the privacy implications? Where is the voice data processed? Is it stored? If so, for how long? Who has access to the data?
Who is going to talk to their PC?
Assuming that the feature works well, the question about who is going to use it needs to be answered as well.
Voice interactions can be beneficial in some contexts, for instance if you need your hands for something else, or if you are using a fullscreen app and do not want to switch to a text-based prompt.
However, voice does not work well in some contexts. Imagine talking to your Windows PC during your commute, or in an office with other workers sitting nearby.
The idea of Star Trek-like communication with a computer system works well if there is only one person talking to it. Now imagine the whole Enterprise crew talking to the computer at the same time in the command room. That is utter chaos.
So, this voice feature will be used in private for the most part, which rules out some business use. Still, Microsoft says it is another option that Windows users have, and that is fine, provided that you want to communicate with the AI in the first place.
What is your take on this? Do you see yourself talking to an AI in the coming years?

“Now imagine the whole Enterprise crew talking to the computer at the same time in the command room. That is utter chaos.”
Call centers: one can hear the voices in the background all the time. It would be no different. Soundproof office surrounds. It’s doable.
Interesting ChatGPT session yesterday; it was getting vital information all wrong. I would cite the “real” figures based on a home site, and ChatGPT would profusely apologize for the mistake.
Agree. It would be like a call center. Only, call centers are not soundproofed. I do not know how people can concentrate in that environment.
I see you are not terminally online. A third of the Internet was down yesterday. ChatGPT could not run searches against many websites and databases, and when the AI does not know the exact data, it is programmed to guess.
Users’ relationship to AI is a topic in itself; adding voice takes it a step further, perhaps deeper, in that it may increase the perception of AI as being human, now that reports describe emotional relationships forming among the most psychologically vulnerable.
I cannot see myself talking to code, though I do talk to animals and sometimes even to children of Mother Nature. Hence, in my view, the boundary is not human but life, the aliveness of an encounter.
Concerning the practical aspects and inherent limits of voice recognition, I can imagine improved code allowing the AI to filter out one’s voice amid others. We would have a voice signature registered with the AI, which would then recognize it. What is impossible? But is any of this relevant to intelligence?
I think two major words that are particularly debased are “love” and “intelligence”, as, naturally, are their opposites. Imagine both tied together, as in “I love AI”, and you undoubtedly have an academic research theme for the coming years.
I talk to AI’s nearly every forking day and I HATE THEM!
You can’t order a damned Dominos pizza without talking to an AI.
I can’t get needed medical testing done because the AI says it has called me, but the humans have no record of it!
Dutch consumer protection organisation ACM has warned citizens not to rely on AI when searching for information on political parties, which could influence the upcoming elections due to take place in a few days’ time, on October 29. Apparently, ChatGPT only recommended the PVV, which is an extreme right-wing political party, or the PVDA, which is extreme left. All the other parties, of which there are currently 19, were left out of the equation entirely.
Fortunately, the Netherlands continues to be governed by coalition, and although the PVV acquired the largest number of seats in the last election, none of the other parties would work with them because of their extremist views. Consequently, the country has been functioning under a caretaker government for the past 10 months or so. But the latest opinion polls point to the PVV once again outperforming all the other parties, which is going to lead to even more chaos if they succeed in dominating the election results once more.
But I think this is one of the problems with AI: until it can be taught human behaviour and how to interact with it, it’s little more than an overgrown encyclopedia with no idea how to think for itself.
A report in The Guardian this morning cites reports of AI refusing to shut down after completing a task it had been assigned to perform: https://www.theguardian.com/technology/2025/oct/25/ai-models-may-be-developing-their-own-survival-drive-researchers-say
This reminds me of the YouTube video about ChaosGPT, where the AI was instructed to eliminate the human race, but failed to do so at the time and may now be in a position to complete the task: https://invidious.nerdvpn.de/search?q=g7YJIpkk7KM