The recent releases of GLM-5 and OpenAI GPT-5.3, my hands-on time with the trending “Imouto Monogatari,” and a deeper look into AutoGPT all landed at once. Some of this is older tech, but it was my first time diving in, and it blew my mind. After two days of intense exploration of bleeding-edge AI, the unemployment anxiety hit me again.
Generative AI Development
I’d used ChatGPT’s deep search and similar features before, but the first time I tried Gemini Deep Research, it hit me: if the content it pulls in really comes from published papers, its output is essentially a review article ready for submission. I was a bit stunned; the academic training I got writing an undergrad thesis suddenly seemed almost meaningless next to AI.
And that’s just Deep Research. I can’t even imagine how much further Gemini’s Deep Think mode goes, or whether the articles it generates are already genuinely usable for research.
The Shift to Autonomous Agents
AutoGPT’s concept also made me realize AI isn’t just about conversation. Letting AI self-iterate until it achieves its goal might be the ultimate path forward.
When OpenClaw blew up, I wondered what kind of prompt could possibly achieve its vision. Now I just feel my thinking was too conservative. Why not let the AI generate its own prompts? It’s like training a model, except the AI itself plays the role of the loss function, judging its own outputs and optimizing the prompt accordingly.
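The “AI optimizes its own prompt” idea can be sketched as a loop where a model-scored objective stands in for the loss function. Everything below is a toy illustration: `call_llm` and `score` are hypothetical stubs for what would really be two model API calls (a generator and a judge), not any actual library.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical model call; a trivial stub for illustration."""
    return f"answer derived from: {prompt}"

def score(output: str, goal: str) -> float:
    """Hypothetical AI judge: higher means closer to the goal (a 'loss' in reverse)."""
    words = goal.split()
    return sum(w in output for w in words) / len(words)

def optimize_prompt(goal: str, rounds: int = 3) -> str:
    """Iteratively ask the model to rewrite its own prompt, keeping the best scorer."""
    prompt = goal  # seed the loop with the goal itself
    best, best_score = prompt, score(call_llm(prompt), goal)
    for _ in range(rounds):
        # the "gradient step": the model proposes an improved prompt
        candidate = call_llm(f"Improve this prompt for the goal '{goal}': {prompt}")
        s = score(call_llm(candidate), goal)
        if s > best_score:
            best, best_score = candidate, s
        prompt = candidate
    return best
```

With real model calls in place of the stubs, the judge model’s score is the only supervision signal; no human writes the intermediate prompts.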
And playing “Imouto Monogatari” stunned me further: with custom prompts alone, you can create real-time, memory-enabled character interactions. That indirectly validates the technical feasibility of OpenClaw as a personal assistant. Perhaps the “fully generative visual novel” is now just a compute-cost problem.
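The memory-enabled character trick is structurally simple: a system prompt pins the persona, and the full conversation history is replayed to the model on every turn. This is a minimal sketch under that assumption; `generate` is a hypothetical stand-in for a real chat-completion API, and the persona text is made up.

```python
PERSONA = "You are a cheerful little-sister character. Always stay in character."

def generate(messages: list[dict]) -> str:
    """Hypothetical model call; this stub just reports how much context it saw."""
    return f"(reply informed by {len(messages)} prior messages)"

class Character:
    def __init__(self, persona: str):
        # "memory" is nothing more than the accumulated message list
        self.history = [{"role": "system", "content": persona}]

    def chat(self, user_text: str) -> str:
        self.history.append({"role": "user", "content": user_text})
        reply = generate(self.history)  # model sees persona + every past turn
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

The obvious limit is the context window, which is exactly why this becomes a compute-cost problem as conversations grow.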
SOTA Status
But let’s be real: how far has AI actually come? Judging from the articles Gemini surfaced, current AI isn’t heading straight for AGI. Instead, it’s specializing. Here are the main trends:
- Claude: Code engineering prowess
- ChatGPT: Deep reasoning and self-validation
- Gemini: Multimodal perception and super-long memory
- Copilot: Primarily black-box integration
Bottom line, it’s no exaggeration: current AIs aren’t just generation tools anymore. They’re starting to feel like digital employees. They take a task, analyze it independently, generate their own prompts, set completion criteria, and iterate until they deliver. The whole process is essentially a boss (human) assigning work to a subordinate (AI) and then reviewing the deliverable, right?
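That “digital employee” loop can be written out as a skeleton: receive a task, draft a plan, execute the steps, self-check against a completion criterion, and iterate until done or out of budget. All three helpers here are hypothetical stubs standing in for model calls; only the control flow is the point.

```python
def plan(task: str) -> list[str]:
    """Hypothetical planner model: break the task into concrete steps."""
    return [f"step {i} of {task}" for i in range(1, 3)]

def execute(step: str) -> str:
    """Hypothetical executor model: carry out one step."""
    return f"done: {step}"

def meets_criteria(results: list[str], task: str) -> bool:
    """Hypothetical self-review: the AI's own completion criterion."""
    return bool(results) and all(r.startswith("done") for r in results)

def run_agent(task: str, max_iters: int = 3) -> list[str]:
    results: list[str] = []
    for _ in range(max_iters):
        results = [execute(s) for s in plan(task)]
        if meets_criteria(results, task):
            break  # the "subordinate" hands finished work back for human review
    return results
```

The human only appears at the two ends of this loop, assigning the task and reviewing `results`, which is exactly the boss-and-subordinate shape described above.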
The Meaning of Humans
That said, when AI first emerged, people already compared it to the Industrial Revolution: machines displaced manual laborers and cost some people their jobs, but they also created plenty of new roles and improved working conditions. AI’s development will presumably create new kinds of work too.
Also, while AI wields unimaginable power, it still needs “initiators” and “reviewers.” AI’s operation still depends on human-set “goals” and “intentions.” Human creativity seems to be something current AI architectures can’t learn via gradient descent.
But as someone living through this, I’m still pessimistic. If humanity’s only remaining role is coming up with ideas, how many jobs in the world truly require creativity? When execution costs plummet, where do the human employees who are mere “executors” find their value anchor?