Anatomy of a conversation

LLMs for Data Analysis in R

R/Medicine 2026

We can do this from R!

library(ellmer)

chat <- chat("anthropic")

chat$chat("Tell me a quick fact about sheep.")
#> Sheep have rectangular pupils that give them 
#> a nearly 360-degree field of vision, 
#> allowing them to see predators approaching 
#> from almost any direction without turning 
#> their heads.

How does this work?

Most LLMs are accessible through HTTP APIs
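
Under the hood, a call like chat$chat() becomes an HTTP POST. Here is a minimal sketch (not ellmer's actual implementation) of such a request with httr2, assuming Anthropic's Messages API endpoint, headers, and model name; check the official API docs before relying on these details:

```r
# A hedged sketch of the HTTP request behind chat$chat(),
# built with httr2 against Anthropic's Messages API.
library(httr2)

req <- request("https://api.anthropic.com/v1/messages") |>
  req_headers(
    `x-api-key` = Sys.getenv("ANTHROPIC_API_KEY"),
    `anthropic-version` = "2023-06-01"
  ) |>
  req_body_json(list(
    model = "claude-sonnet-4-5",
    max_tokens = 1024,
    messages = list(
      list(role = "user", content = "Tell me a quick fact about sheep.")
    )
  ))

# With a valid ANTHROPIC_API_KEY set, this would send the request:
# resp <- req_perform(req)
# resp_body_json(resp)$content[[1]]$text
```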

library(ellmer)

chat <- chat("anthropic")

chat$chat("Tell me a quick fact about sheep.")
#> Sheep have rectangular pupils that give them 
#> a nearly 360-degree field of vision, 
#> allowing them to see predators approaching 
#> from almost any direction without turning 
#> their heads.

chat
<Chat Anthropic/claude-sonnet-4-5 turns=2 input=15 output=37 cost=$0.00>
── user ─────────────────────────────────────────────────────────────────────────────────
Tell me a quick fact about sheep.
── assistant [input=15 output=37 cost=$0.00] ────────────────────────────────────────────
Sheep have rectangular pupils that give them a nearly 360-degree field of vision, allowing them to see predators approaching from almost any direction without turning their heads.

Messages have roles.

Message roles

Role            Description
system_prompt   Instructions from the developer (i.e., you) to set the behavior of the assistant
user            Messages from the person interacting with the assistant
assistant       The AI model’s responses to the user

library(ellmer)

chat <- chat(
  "anthropic",
  system_prompt = "Always answer in haikus."
)

chat$chat("What is chirality?")
#> Molecules can twist,
#> Left hand, right hand—mirror forms,
#> Same but different.

chat
<Chat Anthropic/claude-sonnet-4-5 turns=3 input=17 output=23 cost=$0.00>
── system ───────────────────────────────────────────────────────────────────────────────
Always answer in haikus.
── user ─────────────────────────────────────────────────────────────────────────────────
What is chirality?
── assistant [input=17 output=23 cost=$0.00] ────────────────────────────────────────────
Molecules can twist,
Left hand, right hand—mirror forms,
Same but different.

Your Turn 02_conversation

  1. Set up a chat with a system prompt instructing the model to answer briefly.

  2. Ask: What ellmer function tells me what Anthropic models are available?

  3. Ask: What about OpenAI models?

  4. Create a new chat with no system prompt and ask the second question again.

  5. How do the answers to 3 and 4 differ? Think about both the content and the style.

05:00

Demo: clearbot

👩‍💻 _demos/01_clearbot/app.py

Is this actually a conversation?

LLMs are stateless

  • The LLM doesn’t remember anything between requests

  • You have to send the entire conversation history with every message

  • The LLM reconstructs the “conversation” from what you send
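
The bullets above can be sketched in plain R: the "conversation" is just a list of messages that the client rebuilds and resends in full with every request (the reply text here is illustrative):

```r
# Sketch: why "conversations" require resending history.
# The model keeps no state between requests, so every turn
# is appended to a list that goes out in full next time.
history <- list()

say <- function(role, content) list(role = role, content = content)

# Turn 1: one user message goes out, one assistant reply comes back
history <- append(history, list(say("user", "Tell me a quick fact about sheep.")))
history <- append(history, list(say("assistant", "Sheep have rectangular pupils...")))

# Turn 2: this follow-up only makes sense because we resend
# ALL previous turns along with it
history <- append(history, list(say("user", "Why did they evolve that way?")))

length(history)  # 3 messages go in the next request, not just 1
```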

How LLMs work (briefly)

How do LLMs understand?

If you read everything
ever written…

  • Books and stories

  • Websites and articles

  • Poems and jokes

  • Questions and answers


…then you could…

  • Answer questions
  • Write stories
  • Tell jokes
  • Explain things
  • Translate into any language

How do LLMs respond?

LLMs think in tokens

  • Fundamental units of information for LLMs
  • Words, parts of words, or individual characters
    • “hello” → 1 token
    • “unconventional” → 3 tokens: un|con|ventional
  • Important for:
    • Model input/output limits
    • API pricing (usually billed per token)
  • Not just text: images can be tokenized too
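
A commonly cited rule of thumb, useful for ballpark cost estimates, is that one token is roughly four characters of English text. This is an approximation, not a real tokenizer:

```r
# Rough heuristic (~4 characters per token for English prose).
# Real tokenizers split on subword units, so counts will differ.
approx_tokens <- function(text) ceiling(nchar(text) / 4)

approx_tokens("Tell me a quick fact about sheep.")  # 33 chars -> 9
```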

Demo:
token-possibilities

👩‍💻 _demos/02_token-possibilities/app.R

How to think about LLMs

Think Empirically, Not Theoretically

  • It’s okay to (mostly) treat LLMs as black boxes.

  • Just try it! When wondering whether an LLM can do something,
    experiment rather than theorize.

  • You might think they could not possibly do things
    that they clearly can do today

  • And you might think surely they can do something
    that it turns out they’re terrible at

LLMs are jagged

Embrace the Experimental Process

What if I want to keep chatting back-and-forth?

ellmer can do that, too!

         Console               Browser
ellmer   live_console(chat)    live_browser(chat)

Demo:
live

👩‍💻 _demos/03_live/03_live.R