Cory Doctorow is a well-known science fiction author and visionary writer. In his essay linked below, he makes his case for how AI as we know it is on track to create an economic tech bubble that is likely to burst with some bad consequences. So far, it has many hallmarks of Tulip Mania, and we just saw a big bubble burst with NFTs (non-fungible tokens).
➤ Read Cory Doctorow's Essay, "What Kind of Bubble is AI?" at Locus' site
I'm inclined to agree that there will be a contraction. First off, "AI" as we know it is not a mature technology. It was released experimentally with many known flaws and was quickly monetized. It faces many legal challenges for infringing on copyrighted and protected materials, as well as issues with privacy, security, equity, and bias.
Early adopters have quickly put it to use in many get-rich-quick schemes, and corporations looking to cut costs see it as a solution to one of their greatest expenses: people. After all, if you can employ a technology that works 24/7, doesn't require breaks, lunches, or sleep, doesn't need benefits or vacation time, and delivers work at a fraction of the cost of a person...they see rising profits, climbing stock and shareholder value, and dollar signs spinning in their brains.
However, there are many problems with factual accuracy ("hallucinations"), along with the potential for dangerous consequences for businesses relying on an unproven technology. It is a "Wild West," and certainly potentially disruptive to economies and cultures. Many companies and organizations have jumped on the AI bandwagon merely out of FOMO, without full consideration of the consequences.
I believe that a serious reckoning is to come. There will be big privacy and security problems from the technology...not to mention very public and frightening social engineering through deepfakes. There will likely be implementations and "solutions" built from it that cause huge losses and business disruption, losses that could have been prevented if companies and organizations had held onto the people displaced by AI, the ones who understand the underlying issues. Those people would have helped them take a more gradual and deliberate approach. That is not to say there aren't companies and organizations already adopting a more cautious strategy.
The outcome of these problems will likely be a contraction of AI. Those who jumped on the bandwagon will quickly abandon it. A more cautious, security-driven, moral, and ethical approach to exploring and understanding the technology will then be possible. Sadly, a lot of harm is likely to come both before and during that correction. In the long run, we'll sort out many of the issues and pursue AI with more safeguards, in a world better prepared to deal with the consequences.