Today I found some Open University courses on cybersecurity and AI that I decided to work through for something to do (and also because learning Python might come in handy in my future career. Anything for a competitive edge, eh?)
In the course of working through these, one exercise suggested asking Microsoft Copilot about something you know a lot about, to check its content for accuracy. So of course, my topic of choice was PDA. The content it generated seems pretty accurate, maybe because not many sources discuss PDA, so it's limited in what it can draw from, meaning misinformation in the training data might also be more limited than it would be for other topics.
Over the course of entering prompts, Copilot eventually suggested I could ask it to demonstrate how declarative language would work in a series of roleplays. (Note: inputting the word "roleplay" doesn't work, as it assumes the word is meant sexually. However, there are wordings that circumvent this and let you continue.)
It provides a scenario and asks you to try to find declarative wording that avoids being demanding. It then offers either a refinement or a new scenario to practice with. It's able to offer workplace, family, and friendship scenarios, and possibly others.
I'm not usually one to advocate for AI, as I feel it often represents theft of intellectual and creative property. However, this is a low-effort, no-cost means of practicing language skills that might make a real difference to the PDAers in your life. I couldn't pass up passing that on!