
Commit 6057d17
Author: Automated
Populate docs/ from 0.7.1
1 parent: 83592f1

10 files changed: +37, -136 lines

docs/changelog.md (+5)

@@ -1,5 +1,10 @@
 # Changelog
 
+(v0_7_1)=
+## 0.7.1 (2023-08-19)
+
+- Fixed a bug where some users would see an `AlterError: No such column: log.id` error when attempting to use this tool, after upgrading to the latest [sqlite-utils 3.35 release](https://sqlite-utils.datasette.io/en/stable/changelog.html#v3-35). [#162](https://github.com/simonw/llm/issues/162)
+
 (v0_7)=
 ## 0.7 (2023-08-12)
 
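The `AlterError` entry in the changelog above is about a schema check against a `log` table. As an illustrative sketch only (not the project's actual fix), here is how a missing column can be detected with Python's stdlib `sqlite3` before attempting a migration; the `log` table and `id` column names come from the error message, and the helper itself is hypothetical:

```python
import sqlite3


def has_column(db_path, table, column):
    """Return True if `table` in the SQLite file at `db_path` has `column`.

    Illustrative only: the real tool uses sqlite-utils for migrations;
    this just shows the kind of PRAGMA-based check involved.
    """
    conn = sqlite3.connect(db_path)
    try:
        # PRAGMA table_info returns one row per column; index 1 is the name
        cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
        return column in cols
    finally:
        conn.close()
```

A migration guarded this way can skip or repair the `ALTER TABLE` step instead of raising.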

docs/help.md (+12, -16)

@@ -183,16 +183,12 @@ Usage: llm logs list [OPTIONS]
   Show recent logged prompts and their responses
 
 Options:
-  -n, --count INTEGER         Number of entries to show - defaults to 3, use 0
-                              for all
-  -p, --path FILE             Path to log database
-  -m, --model TEXT            Filter by model or model alias
-  -q, --query TEXT            Search for logs matching this string
-  -t, --truncate              Truncate long strings in output
-  -c, --current               Show logs from the current conversation
-  --cid, --conversation TEXT  Show logs for this conversation ID
-  --json                      Output logs as JSON
-  --help                      Show this message and exit.
+  -n, --count INTEGER  Number of entries to show - 0 for all
+  -p, --path FILE      Path to log database
+  -m, --model TEXT     Filter by model or model alias
+  -q, --query TEXT     Search for logs matching this string
+  -t, --truncate       Truncate long strings in output
+  --help               Show this message and exit.
 ```
 ### llm models --help
 ```
@@ -204,8 +200,8 @@ Options:
   --help  Show this message and exit.
 
 Commands:
-  list*    List available models
   default  Show or set the default model
+  list     List available models
 ```
 #### llm models list --help
 ```
@@ -236,10 +232,10 @@ Options:
   --help  Show this message and exit.
 
 Commands:
-  list*  List available prompt templates
-  edit   Edit the specified prompt template using the default $EDITOR
-  path   Output the path to the templates directory
-  show   Show the specified prompt template
+  edit   Edit the specified prompt template using the default $EDITOR
+  list   List available prompt templates
+  path   Output the path to the templates directory
+  show   Show the specified prompt template
 ```
 #### llm templates list --help
 ```
@@ -287,7 +283,7 @@ Options:
   --help  Show this message and exit.
 
 Commands:
-  list*   List current aliases
+  list    List current aliases
   path    Output the path to the aliases.json file
  remove  Remove an alias
  set     Set an alias for a model

docs/logging.md (+7, -33)

@@ -54,13 +54,7 @@ You can view the logs using the `llm logs` command:
 ```bash
 llm logs
 ```
-This will output the three most recent logged items in Markdown format
-
-Add `--json` to get the log messages in JSON instead:
-
-```bash
-llm logs --json
-```
+This will output the three most recent logged items as a JSON array of objects.
 
 Add `-n 10` to see the ten most recent items:
 ```bash
@@ -70,39 +64,19 @@ Or `-n 0` to see everything that has ever been logged:
 ```bash
 llm logs -n 0
 ```
-You can truncate the display of the prompts and responses using the `-t/--truncate` option. This can help make the JSON output more readable:
-```bash
-llm logs -n 5 -t --json
-```
-### Logs for a conversation
-
-To view the logs for the most recent {ref}`conversation <conversation>` you have had with a model, use `-c`:
-
-```bash
-llm logs -c
-```
-To see logs for a specific conversation based on its ID, use `--cid ID` or `--conversation ID`:
-
-```bash
-llm logs --cid 01h82n0q9crqtnzmf13gkyxawg
-```
-
-### Searching the logs
-
-You can search the logs for a search term in the `prompt` or the `response` columns.
+You can search the logs for a search term in the `prompt` or the `response` columns:
 ```bash
 llm logs -q 'cheesecake'
 ```
-The most relevant terms will be shown at the bottom of the output.
-
-### Filtering by model
-
 You can filter to logs just for a specific model (or model alias) using `-m/--model`:
 ```bash
 llm logs -m chatgpt
 ```
-
-### Browsing logs using Datasette
+You can truncate the display of the prompts and responses using the `-t/--truncate` option:
+```bash
+llm logs -n 5 -t
+```
+This is useful for finding a conversation that you would like to continue.
 
 You can also use [Datasette](https://datasette.io/) to browse your logs like this:
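The `llm logs` commands in the diff above read from a SQLite log database (see the `-p/--path` option). As a sketch of inspecting that database directly, assuming only that it is a SQLite file (its table names vary between versions, so none are hard-coded here), you can list its tables with stdlib `sqlite3`:

```python
import sqlite3


def list_tables(db_path):
    """Return the sorted names of all tables in a SQLite database file.

    Illustrative sketch: useful for a first look at an llm logs database
    before tools like Datasette, without assuming any particular schema.
    """
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
        return [name for (name,) in rows]
    finally:
        conn.close()
```

Datasette, mentioned below, offers the same kind of exploration through a web UI.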

docs/other-models.md (+4, -18)

@@ -12,7 +12,7 @@ To install **[llm-gpt4all](https://github.com/simonw/llm-gpt4all)**, providing 1
 ```bash
 llm install llm-gpt4all
 ```
-Run `llm models` to see the expanded list of available models.
+Run `llm models list` to see the expanded list of available models.
 
 To run a prompt through one of the models from GPT4All specify it using `-m/--model`:
 ```bash
@@ -52,9 +52,9 @@ With this configuration in place, the following command should run a prompt agai
 ```bash
 llm -m 0613 'What is the capital of France?'
 ```
-Run `llm models` to confirm that the new model is now available:
+Run `llm models list` to confirm that the new model is now available:
 ```bash
-llm models
+llm models list
 ```
 Example output:
 ```
@@ -87,7 +87,7 @@ If the `api_base` is set, the existing configured `openai` API key will not be s
 
 You can set `api_key_name` to the name of a key stored using the {ref}`api-keys` feature.
 
-Having configured the model like this, run `llm models` to check that it installed correctly. You can then run prompts against it like so:
+Having configured the model like this, run `llm models list` to check that it installed correctly. You can then run prompts against it like so:
 
 ```bash
 llm -m orca-openai-compat 'What is the capital of France?'
@@ -96,17 +96,3 @@ And confirm they were logged correctly with:
 ```bash
 llm logs -n 1
 ```
-
-### Extra HTTP headers
-
-Some providers such as [openrouter.ai](https://openrouter.ai/docs) may require the setting of additional HTTP headers. You can set those using the `headers:` key like this:
-
-```yaml
-- model_id: claude
-  model_name: anthropic/claude-2
-  api_base: "https://openrouter.ai/api/v1"
-  api_key_name: openrouter
-  headers:
-    HTTP-Referer: "https://llm.datasette.io/"
-    X-Title: LLM
-```

docs/plugins/installing-plugins.md (+2, -2)

@@ -17,11 +17,11 @@ The `-y` flag skips asking for confirmation.
 
 You can see additional models that have been added by plugins by running:
 ```bash
-llm models
+llm models list
 ```
 Or add `--options` to include details of the options available for each model:
 ```bash
-llm models --options
+llm models list --options
 ```
 To run a prompt against a newly installed model, pass its name as the `-m/--model` option:
 ```bash

docs/plugins/tutorial-model-plugin.md (+1, -1)

@@ -346,7 +346,7 @@ Let's add extra validation rules to our options. Length must be at least 2. Dura
 
 The `Options` class uses [Pydantic 2](https://pydantic.org/), which can support all sorts of advanced validation rules.
 
-We can also add inline documentation, which can then be displayed by the `llm models --options` command.
+We can also add inline documentation, which can then be displayed by the `llm models list --options` command.
 
 Add these imports to the top of `llm_markov.py`:
 ```python

docs/python-api.md (+1, -32)

@@ -22,12 +22,10 @@ The `llm.get_model()` function accepts model names or aliases - so `chatgpt` wou
 Run this command to see a list of available models and their aliases:
 
 ```bash
-llm models
+llm models list
 ```
 If you have set a `OPENAI_API_KEY` environment variable you can omit the `model.key = ` line.
 
-Calling `llm.get_model()` with an invalid model name will raise a `llm.UnknownModelError` exception.
-
 (python-api-system-prompts)=
 
 ### System prompts
@@ -96,32 +94,3 @@ print(response2.text())
 You will get back five fun facts about skunks.
 
 Access `conversation.responses` for a list of all of the responses that have so far been returned during the conversation.
-
-## Other functions
-
-The `llm` top level package includes some useful utility functions.
-
-### set_alias(alias, model_id)
-
-The `llm.set_alias()` function can be used to define a new alias:
-
-```python
-import llm
-
-llm.set_alias("turbo", "gpt-3.5-turbo")
-```
-The second argument can be a model identifier or another alias, in which case that alias will be resolved.
-
-If the `aliases.json` file does not exist or contains invalid JSON it will be created or overwritten.
-
-### remove_alias(alias)
-
-Removes the alias with the given name from the `aliases.json` file.
-
-Raises `KeyError` if the alias does not exist.
-
-```python
-import llm
-
-llm.remove_alias("turbo")
-```

docs/requirements.txt (+1, -2)

@@ -1,5 +1,4 @@
-sphinx==7.2.2
-furo==2023.8.17
+furo==2023.7.26
 sphinx-autobuild
 sphinx-copybutton
 myst-parser

docs/templates.md (+1, -11)

@@ -26,6 +26,7 @@ You can also save default parameters:
 llm --system 'Summarize this text in the voice of $voice' \
   --model gpt-4 -p voice GlaDOS --save summarize
 ```
+
 ## Using a template
 
 You can execute a named template using the `-t/--template` option:
@@ -39,17 +40,6 @@ This can be combined with the `-m` option to specify a different model:
 curl -s https://llm.datasette.io/en/latest/ | \
   llm -t summarize -m gpt-3.5-turbo-16k
 ```
-## Listing available templates
-
-This command lists all available templates:
-```bash
-llm templates
-```
-The output looks something like this:
-```
-cmd    : system: reply with macos terminal commands only, no extra information
-glados : system: You are GlaDOS prompt: Summarize this: $input
-```
 
 ## Templates as YAML files

docs/usage.md (+3, -21)

@@ -30,7 +30,6 @@ Some models support options. You can pass these using `-o/--option name value` -
 llm 'Ten names for cheesecakes' -o temperature 1.5
 ```
 
-(conversation)=
 ## Continuing a conversation
 
 By default, the tool will start a new conversation each time you run it.
@@ -69,29 +68,12 @@ This is useful for piping content to standard input, for example:
 curl -s 'https://simonwillison.net/2023/May/15/per-interpreter-gils/' | \
   llm -s 'Suggest topics for this post as a JSON array'
 ```
-Different models support system prompts in different ways.
-
-The OpenAI models are particularly good at using system prompts as instructions for how they should process additional input sent as part of the regular prompt.
-
-Other models might use system prompts change the default voice and attitude of the model.
-
-System prompts can be saved as {ref}`templates <prompt-templates>` to create reusable tools. For example, you can create a template called `pytest` like this:
-
-```bash
-llm -s 'write pytest tests for this code' --save pytest
-```
-And then use the new template like this:
-```bash
-cat llm/utils.py | llm -t pytest
-```
-See {ref}`prompt templates <prompt-templates>` for more.
-
 ## Listing available models
 
-The `llm models` command lists every model that can be used with LLM, along with any aliases:
+The `llm models list` command lists every model that can be used with LLM, along with any aliases:
 
 ```bash
-llm models
+llm models list
 ```
 Example output:
 ```
@@ -103,7 +85,7 @@ PaLM 2: chat-bison-001 (aliases: palm, palm2)
 ```
 Add `--options` to also see documentation for the options supported by each model:
 ```bash
-llm models --options
+llm models list --options
 ```
 Output:
 <!-- [[[cog
