I’ve been locked into some heavy coding with Claude Code in VS Code and Google’s new Antigravity. The tools have come a long way in 3 months. (Antigravity keeps running out of its context window, argh! It wrecked my codebase and I had to backtrack!)

Takeaway: Building is so much faster. I have learned so many techniques I did not know were possible. I’m building robust web apps I could not build solo (Dammit, Jim, I’m a designer and researcher, not an expert developer! 🖖). And when I do build by hand, I am so much faster, because I take the time to learn from the process and from what I make with the tools.

But AI Literacy involves…

  • Knowing how to write a sequence of complex prompts that segment the activity into manageable bits.
  • Parsing the task into stages that all point to an eventual goal. Ya gotta have in mind what you are trying to accomplish.
  • Building off of surprises: when an unexpected feature emerges while using AI, evaluate it and then build on it… sometimes changing direction.
  • Engaged critical thinking: my brain is always on, thinking systemically and systematically… evaluating results and adjusting with the whole picture in mind.

Experience is invaluable. I can head off so many dead ends when using AI because I have caused them in my own code in the past, and I know when a turn will lead to a cul-de-sac. The more experience you have, the better the result (this is an issue for early-career folx, the make-it-take-it syndrome 🏀). Then again, folks who are self-initiated and try stuff, break it, make it, read it, and explore will gain experience. I think we need to mentor students to choose active exploration and experimentation more, instead of passive consumption. This has been a theme for me for decades.

GenAI is getting better at generating new and useful feature directions rather than wild-goose-chasing all over the place. Then again, maybe I am more AI Literate. A merge, perhaps? Am I getting better? Is it getting better? I am going further and can do more. I like that. And the work is stronger. I am more curious and inspired than I used to be. Hope springs eternal.

Thinking is actually more important, and more active, than when reading, because the content evolves and morphs. The content I read—as long as I use it like putty in my hands, something that can be manipulated and reconfigured—is not an inanimate authority but a resource that changes me and results in something new (creation, experience, knowing, awareness).

Hm. I imagine previous scholars, writers, and creators would rather their work be alive—moving people, changing societies—than be a static testament to their prowess. (Okay, some folx probably do want their work to be just self-referential.) Note: I am ignoring concepts of the self, ego, legacy, ownership, etc., here. These are real, and they factor into why people create and how they think creations should be used and credited. Folx need to make a living from their creations because, in our current society, basic needs are not met if you do not have money or cannot trade your goods/services.

Oh, and the user’s agency is oh-so-important here. Intent (see Ajzen’s Theory of Planned Behavior) determines whether AI use is “fix it for me” or “let’s explore ways to apply content for discovery.” Brains that are still developing, and folx who are in a hurry to get to the next thing, are not gonna spend time to discover. They’ll use AI to crap something out because they value something other than the thing they are doing.

What if students valued what we invite them to do? To learn?

Media Pairing: I wrote this while listening to “Glittering Grandeur” by Paul Mottram. It may have been an influence: https://soundcloud.com/paulmottram/glittering-grandeur

[Image: rainbows flowing out of a silicon chip]