Prompting, Plain and Simple: The Lowest Rung of the AI Tool Fit Map
Most teams don’t need “agents” yet. They just need prompting - the everyday way small businesses turn AI into leverage - for fast drafts, sharper thinking, and cleaner data. This piece defines prompting agents, where they fit (personal, functional, team), and what they’re good and bad at.
Janna Szangolies
11 Sept 2025
Image made in Midjourney.
Started From the Bottom
Everyone says “AI.” Most people are just prompting. That’s okay. Prompting is the lowest friction path to value. No integrations. No procurement. No new system to rewire the team. You sit down. You ask. You get an answer.
The trick is to do it with purpose.
This article helps you define that purpose. It places prompting at the bottom of the AI Tool Fit Map. It shows what prompting can do, what it cannot, and how to use it without breaking trust, budgets, brains, and your business.
The Definition of “Prompting Agents”
We first need to anchor things to the terms GPT and prompting.
Defining GPTs
“GPT” has become a sort of household name, but few actually know what it stands for.
It’s Generative Pre-Trained Transformer:
Generative: creates assets — text, images, videos, audio, code.
Pre-Trained: learned from a huge dataset, not from live real-time information.
Transformer: a neural network architecture that processes sequences of data, finding patterns and signals.
None of this makes it a database of facts or a true reasoning engine. Even though you have great conversations and feel like Chat is your friend, it doesn’t have real understanding, verification, or real-time awareness.
(And if you’ve spent any time on Twitter recently, like me you probably think most humans lack these qualities too — but that’s beside the point).
Defining Prompting
Prompting is direct instruction to a language model in a chat interface to produce, transform, analyse, or reason about text, numbers, or structures.
A prompting agent is a simple, single-role assistant you can engage with on-demand. It has no long-run autonomy, limited memory, and no control over your systems. Think: a sharp intern sitting next to you with a good ear and a short attention span.
What a prompting agent is not: a multi-app orchestrator, a background process, or a bot that acts without you. If your workflow needs tools, triggers, or guarantees, you’re beyond prompting and moving up the AI Tool Fit chain.
It can be useful to think of a split:
Chatbot: an AI that answers questions or generates assets.
Copilot: an AI that lives within another tool and assists you in completing tasks.
Agent: an AI that works on your behalf on its own.
This article focuses on the chatbot part of that split.
Why call it a prompting “agent” at all then? Because the mental model helps. You’re delegating a role, not typing magic words. You define purpose, inputs, format, and guardrails. That lens stops the “let’s see what happens” habit and pushes toward repeatable outcomes.
Prompting agents really are the gateway drug of AI.
Chatbot Adoption in 2025
What does the data tell us about prompt adoption? From the MIT State of AI in Business 2025 report:
SMBs & Startups: 91% of Millennial entrepreneurs and 87% of Gen X/Boomers have incorporated at least some AI into their business. First-time founders are even more likely (45%) to have significantly adopted AI tools.
Enterprise: Adoption is high, but transformation is rare. Over 80% of organisations have piloted ChatGPT/Copilot-style tools, but only 5% of custom enterprise AI tools make it into full production. Most large firms get stuck — pilots stall, workflows don’t adapt, and ROI stays flat.
Shadow AI: Across 90%+ of companies, employees are already using personal ChatGPT or Claude accounts (“shadow AI”), often more effectively than official enterprise deployments.
The takeaway: prompting is the real front line. Startups run with it; enterprises struggle to scale it. For SMBs, this is an edge: you can move faster while bigger firms are still tangled in pilots.
Prompting Modes
Shadow AI highlights a real risk: personal prompting can expose the business. It’s important to understand the four levels of prompting tools:
Public GPT Prompting (Free, Plus)
Data settings vary by account: By default, chats can be used for model training unless a user disables the “Improve the model” setting.
Security: Personal experimentation is fine, but with the training setting left on, sensitive business information could be used to train future models.
ChatGPT Team (Business-level offering)
Positioned between Personal and Enterprise. It offers a shared workspace, admin controls, connectors, single sign-on (SSO), and multi-factor authentication.
Security: Business plan users have more control over data, privacy, and retention than Plus users—even if not as locked down as Enterprise-level guarantees.
Enterprise GPT Prompting (ChatGPT Enterprise)
Robust, contractually backed privacy. Business data (inputs and outputs) is never used for model training by default.
Security: Strong encryption (AES-256 at rest, TLS in transit), plus admin tools like SAML SSO, role-based access, provisioning, and audit logs.
Custom GPTs (on Business or Enterprise)
Customised assistants built on top of your own prompt guidance and reference documents. These inherit the same privacy guarantees as Business or Enterprise workspaces.
Security: There are emerging risks from “instruction leaking”—some poorly protected Custom GPTs can unintentionally expose their internal logic through crafted prompts. It helps to know what you are doing.
Prompting Categories & Built-In Options (Cross-Model)
Not all prompting is the same. Vendors now expose built-in modes that map to common categories.
For OpenAI, note that Advanced Data Analysis (ADA) is no longer a separate model toggle — its file and code analysis powers now live directly inside GPT-4o. And with GPT-5 arriving, GPT-4o remains the stable workhorse for prompting, reasoning, and multimodal use, while GPT-5 pushes fidelity further.
Category | Use Case | Best Fit | Built-In Options |
---|---|---|---|
Reasoning & Problem Solving | Breaking down problems, exploring scenarios, counter-arguments | GPT-4o (and GPT-5 for advanced fidelity), Claude Opus, Gemini Pro | Model selector in ChatGPT, Claude’s long context |
Search & Retrieval | Summarising PDFs, extracting policy points, document QA | Claude (large context), GPT-4o with built-in analysis, Gemini for Docs/Sheets | File uploads, Workspace side panel |
Deep Research & Synthesis | Combining sources into briefs, reports, knowledge hubs | Claude Opus, ChatGPT Enterprise (4o/5 with file uploads), Gemini Pro | Custom GPTs with domain docs |
Content Generation | Drafting emails, posts, blogs, proposals | GPT-4o, GPT-5, Claude Sonnet, Gemini Flash | Output formatting (Markdown, JSON) |
Image Generation | Visuals, concept art, UI mockups | Midjourney, DALL·E 3, Stable Diffusion, Nano-Banana (Google AI Studio) | Midjourney params, DALL·E edit mode, Nano-Banana editing |
Data Wrangling & Transformation | Cleaning, reformatting, JSON | GPT-4o/5 with analysis, Claude, Gemini for Sheets | Python execution, CSV parsing |
Simulation & Role Play | Stakeholder roleplay, customer interviews, coaching | Claude, ChatGPT Custom GPTs | Persona setup |
Coding & Technical Support | Writing, reviewing, debugging code | GPT-4o/5, Claude, GitHub Copilot | Copilot inline vs chat, executable environments |
Note: tools and models change rapidly. This list could be outdated in days rather than months, which is why building and staying oriented in this space is so difficult.
Task Scope: Where Prompting Fits
Prompting is domain-specific. For business owners and ops leads, we break tasks into three categories:
Scope | Examples | Win | Risk |
---|---|---|---|
Personal | Inbox triage (draft replies, rewrite tone, summarise threads); Deep reading (report → memo); Thinking partner (explore options, list risks); Skill practice (mock interviews, spaced recall) | Speed, low coordination cost | Over-trust — treating drafts as final |
Functional | Marketing (brief outlines, headline variants, post repurposing); Sales (account research, call summaries, follow-ups); Finance/Ops (normalise CSVs, check policy drafts, generate checklists) | Faster first pass, better coverage | Domain drift — confident but shaky facts |
Team | Meeting loops (agenda → notes → actions → recap); Design reviews (condense comments into decisions); R&D (weekly digest from notes) | Reduced meeting sprawl, standardised outputs | Provenance & permissions — sensitive data risks without controls |
What Prompting is Good At
Prompting shines when the work is light, fast, and reversible. Think of it as an intern who is always awake, quick on the keyboard, and never offended if you throw out their draft.
Speed to first draft: Three versions of a proposal before lunch. Momentum over polish.
Compression: Turn sprawling notes into a one-pager. Great for digestion and handovers.
Reframing: Formal → friendly, policy → FAQ, client → internal team.
List-making & decomposition: Break foggy goals into steps and roles.
Checks & critiques: Built-in contrarian: what’s missing, what fails, what to counter.
Light data wrangling: Clean a contact list, dedupe names, draft a regex. Good enough before the real tools.
Scaffolds: Skeleton templates for briefs, specs, agendas.
Computers Can Now See
One of the biggest advantages of prompting today is that computers can now see. For the first time, chatbots don’t just process text — they can literally read and understand images.
Upload a photo of a product, and they can describe its features.
Drop in a chart, and they’ll summarise the key insights.
Share a screenshot of an error, and they can troubleshoot.
This is a leap forward from the text-only world of early prompting - and I'm often surprised more people aren't screaming this from the rafters. It's a bit like sitting in an airplane, 10,000 feet up, casually eating peanuts while hurtling through space — completely extraordinary, but we act like it’s normal.
In practice, it means:
Faster analysis (no manual data entry).
Richer conversations (combine text + images in the same thread).
New use cases (design reviews, audits, inventory checks, visual QA).
For SMBs, this is a game-changer: instead of writing long explanations, you can simply show the AI what you mean.
What Prompting is Bad At
Prompting has sharp edges. It moves fast, but not always straight. Without care, you inherit its risks.
System guarantees: AI chat tools don’t promise reliability the way business software does. There’s no guarantee they’ll be available at all times (uptime), no built-in way to track who did what (audit trail), and no promise you’ll always get the same answer to the same question (determinism).
Truth under pressure: Models hallucinate: inventing facts, citations, or details with confidence. Example: a fake report, misquoted client brief, or an address that looks real but isn’t. Fabrication can be presented as fact.
Limited long-term memory: Context windows are finite. Yesterday’s insight isn’t guaranteed today, and you can lose earlier threads unless you back them up to a document.
Policy & privacy: Paste the wrong snippet into a public model and it leaks.
Up-to-date limits: Models often work from old data. An answer referencing 2024 guidelines might be wrong for 2025 tool versions, sending you in circles.
Prompting is powerful when the stakes are low and errors are reversible. If you need guarantees, safe handling, or repeatability, move up the Tool Fit Map.
The Latest: Prompt Engineering vs Context Engineering
As I’ve stated at the start of this series, we’re still in the “pre-React” phase of AI. One way teams have been boosting fidelity is through prompt and context engineering.
Prompt engineering is sharpening the question: roles, audiences, formats, constraints. It makes outputs closer to intent and repeatable.
Context engineering is feeding the model input material: documents, examples, policies, memory management. In practice: show, don’t just tell. Provide golden examples — actual samples the model can copy and adapt.
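As a rough sketch, “showing” the model can mean packaging the instruction together with its context; the field names and sample content below are assumptions for illustration only:

```json
{
  "instruction": "Rewrite the new testimonial in our house style.",
  "context": {
    "style_guide": "Short sentences. Plain English. No exclamation marks.",
    "golden_example": "Working with the team halved our reporting time. Simple as that.",
    "new_input": "We are SO happy!!! Reports get done so much quicker now!!!"
  }
}
```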
For SMBs at the prompting layer, prompt engineering is usually enough. As teams scale, context becomes a lever.
JSON Prompting
One technique sits between the two: JSON prompting. Instead of free text, smart prompters structure outputs into tables, schemas, or JSON objects. Humans read paragraphs. Systems need fields.
JSON (JavaScript Object Notation) is a way of writing down information so both humans and computers can understand it.
Think of it like a set of labelled boxes: each label (key) has something inside (value).
JSON Structuring Example
Take a look at the following structure.
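A minimal example (the keys and values here are purely illustrative):

```json
{
  "name": "Alice",
  "role": "Designer",
  "team": "Marketing"
}
```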
This says: Name = Alice, Role = Designer, Team = Marketing.
Why it matters for AI: JSON is neat, predictable, and structured — which makes it easier to plug AI outputs straight into tools, dashboards, or databases instead of messy paragraphs.
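For example, instead of a paragraph, you can ask the model to return its answer in a fixed shape like this (a sketch only; the fields are illustrative, not a required format):

```json
{
  "supplier": "Acme Pty Ltd",
  "invoice_number": "INV-1042",
  "total": 1290.50,
  "currency": "AUD",
  "due_date": "2025-10-15"
}
```

Every response then lands in the same fields, which is exactly what a spreadsheet, dashboard, or database expects.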
Prompt Engineering Using JSON
Let's say you wanted to turn an image of your logo into a glassy object.

You might start with a simple prompt like this:
“Make the object look kind of glassy and cool, with some bubbles. Put it on a white background and make it realistic.”
It's not terrible, but the prompt is vague: there is no consistent interpretation, and no constraints on lighting, composition, or material properties. Any future use of the prompt could produce wildly different results. We lack consistency.

Now let's look at a structured JSON prompt that is designed for machine readability.
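The exact prompt will vary by tool and team, but the shape is something like the sketch below. The fields and values are illustrative assumptions, not a canonical schema:

```json
{
  "task": "restyle the uploaded logo",
  "material": {
    "type": "glass",
    "finish": "glossy",
    "details": "small internal bubbles"
  },
  "lighting": "soft studio light with gentle reflections",
  "composition": {
    "background": "plain white",
    "framing": "centred, slight three-quarter angle"
  },
  "style": "photorealistic render",
  "output": {
    "format": "PNG",
    "aspect_ratio": "1:1"
  }
}
```

Every element the free-text prompt left to chance (material, lighting, background, realism) is now pinned down in its own field.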

The same structured prompt can then be applied to new assets, producing consistent results.

When you think in machine terms and use structured JSON for prompting, you can create higher-fidelity outputs at greater speed. That's the essence of prompt engineering.
Prompt Engineering Practical Moves
Always pass context (policy snippets, data dictionaries).
Show, don’t just tell (use golden examples).
Name role + audience.
Constrain with word counts, tables, checklists.
Chain lightly (outline → draft → critique).
Keep a living prompt library (one possible entry format is sketched below).
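To make the last point concrete, here is one possible shape for a prompt-library entry, so the role, audience, format, and constraints travel together. The field names are assumptions; adapt them to whatever your team already uses:

```json
{
  "name": "weekly_client_update",
  "role": "You are an account manager writing to a non-technical client.",
  "audience": "Client sponsor who skims emails on a phone",
  "inputs": ["this week's project notes", "open risks"],
  "format": "Three short sections: Progress, Risks, Next steps",
  "constraints": ["under 200 words", "plain English", "no jargon"],
  "golden_example": "Paste a past update the client responded well to."
}
```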
Checklist: When to Choose Prompting (and When Not To)
This simple checklist gives you a guide.
Choose prompting when:
You need results today without approval.
The work is low-risk and reversible.
You want to learn the shape of a solution.
The output lives in docs, notes, drafts.
Do not choose prompting when:
Compliance, audit, or guarantees are required.
Workflows must touch systems of record.
Data is sensitive and policies forbid external use.
You need speed at team scale, every day.
Escalation path:
Start with prompting.
Turn stable prompts into checklists.
If handoffs hurt, graduate to no-code workflows (something Immensity of the Sea is built to deliver for businesses).
If rules/audits matter, adopt process-first automation.
If decisions should be AI-led, explore orchestration tools.
What We’re Seeing in SMB Adoption
At Immensity of the Sea, we work across a lot of industries with business owners of all types. This has let us see some interesting patterns:
Prompting is the on-ramp. It’s fairly easy to adopt, and it normally evolves from shadow AI usage. A few people become the “AI helpers” for the rest; it’s important to find and empower them.
Founders should experiment. Best entry: get a business/team account, spend 1–2 hours a week prompting and experimenting.
Wider use follows trust. Once prompts are documented, others copy and adapt. Hallucinations drop. Outputs get stronger.
Bottlenecks appear at handoff. Copy-paste into systems of record causes errors. Looking at no-code workflows can help.
Winners standardise. Good prompts become playbooks. Governance becomes normal.
Implication: prompting scales locally first (within a person or small team). Cross-team scale needs patterns and ownership.
Next Steps
At Immensity of the Sea, we help SMBs move from experimentation to governance. That means:
Designing a prompting strategy aligned to your workflows.
Setting up the right tools (Business/Team GPT, Custom GPTs).
Creating governance and prompting guidelines inside our Momentum Mapper framework.
Building a prompt library your team can share, adapt, and trust.
We help teams graduate from prompting into no-code workflows, then into structured automation as their needs mature. Prompting is the gateway. With the right setup, you can scale safely and move up the AI Tool Fit Map when ready.