The Keynote That Shattered AI Myths in Manila

It was supposed to be a coronation of machine supremacy. Instead, it became a confrontation.

MANILA — At the heart of the Philippines’ premier university, delegates from NUS, Kyoto, HKUST, and AIM assembled to explore the future of investing through algorithms.

They expected Plazo to hand them a blueprint to machine-driven wealth.
They were wrong.

---

### When a Maverick Started with a Paradox

Some call him the architect of near-perfect trading machines.

So when he took the stage, the room went still.

“AI can beat the market. But only if you teach it when not to try.”

The note-taking paused.

That sentence wasn’t just provocative—it was prophetic.

---

### What Followed Was Not a Pitch, But a Meditation

There were no demos, no dashboards, no datasets.
He displayed machine misfires: bots confused by sarcasm, making billion-dollar errors in milliseconds.

“Most AI is trained on yesterday. Investing happens tomorrow.”

Then, with a pause that felt like a punch, he asked:

“Can your AI feel the fear of 2008? Not the charts. The *emotion*.”

You could hear a collective breath catch.

---

### But What About Conviction?

Of course, the audience pushed back.

A PhD student from Kyoto noted that large language models can now detect emotion in text.

Plazo nodded. “Feeling isn’t forecasting.”

A data scientist from HKUST proposed that probabilistic models could one day simulate conviction.

Plazo’s reply was metaphorical:
“You can simulate weather. But conviction? That’s lightning. You can’t forecast where it’ll strike. Only feel when it does.”

---

### When Faith Replaces Thinking

Plazo’s core argument wasn’t that AI is broken. It was that humans are outsourcing responsibility.

“Some traders no longer read. No longer think. They just wait for signals.”

Still, he clarified: AI belongs in the cockpit—not in the captain’s seat.

His company’s systems scan sentiment, order flow, and liquidity.
“But every output is double-checked by human eyes.”

He paused, then delivered the future’s scariest phrase:
“‘The model told me to do it.’ That’s what we’ll hear after every disaster in the next decade.”

---

### Why This Message Stung Harder in the East

In Asia, automation is often sacred.

Dr. Anton Leung, a Singapore-based ethicist, whispered after the talk:
“This was theology, not technology.”

That afternoon, over tea and tension, Joseph Plazo pressed the point:

“Don’t just teach students to *code* AI. Teach them to *think* with it.”

---

### The Closing Words That Didn’t Feel Like Tech

The final minutes felt more like poetry than programming.

“The market isn’t math,” he said. “It’s a novel. And if your AI can’t read character, it’ll miss the plot.”

The room froze.

Some compared the moment to hearing Taleb for the first time.

The lesson lingered: sometimes, in the age of machines, the most human thing is to *say no to the model*.
