Interesting discussion. Moving away from coding and text generation for a sec: if we look at broader generative AI, the shift is undeniable.
AI image generation tools can now take a fairly vague, generalized prompt and, through occasional bursts of algorithmic "inspiration," produce highly usable assets. AI image-to-image editing is rapidly reaching the point where it makes Photoshop practically redundant for the average user.
However, AI 3D model generation, specifically the image-to-3D pipeline, still has a bit of a road ahead. Outputs remain inconsistent, polygon counts are often excessively high, and mesh topology definitely has room for improvement. But realistically? That gap is not huge anymore, and it is shrinking by the day. And of course, compute and generation costs are only going to keep dropping.
The overarching trend is clear: if a job merely requires a basic "tool operator" who applies a tiny bit of intelligence to execute a mechanical software task, that role is actively being replaced.