AI-Toolkit AUTO_DOC.md

# AI-Toolkit Documentation
Generated on Friday, 13 March 2026, 21:07:34 CET

```
▄▄▄ ██▓ ▄████▄ ██▓ ██▓
▒████▄ ▓██▒▒██▀ ▀█ ▓██▒ ▓██▒
▒██ ▀█▄ ▒██▒▒▓█ ▄ ▒██░ ▒██▒
░██▄▄▄▄██ ░██░▒▓▓▄ ▄██▒▒██░ ░██░
▓█ ▓██▒░██░▒ ▓███▀ ░░██████▒░██░
▒▒ ▓▒█░░▓ ░ ░▒ ▒ ░░ ▒░▓ ░░▓
▒ ▒▒ ░ ▒ ░ ░ ▒ ░ ░ ▒ ░ ▒ ░
░ ▒ ▒ ░░ ░ ░ ▒ ░
░ ░ ░ ░ ░ ░ ░ ░

>> TERMINAL INTELLIGENCE v. 1.73 <<
```
## Project Structure
```
.
├── ai.sh
├── AUTO_DOC.md
├── config
│   ├── ai-toolkit.conf
│   ├── models
│   │   ├── deepcoder:14b.meta
│   │   ├── deepseek-coder-v2:16b.meta
│   │   ├── deepseek-coder:33b.meta
│   │   ├── deepseek-r1:14b.meta
│   │   ├── deepseek-r1:32b.meta
│   │   ├── dolphincoder:15b.meta
│   │   ├── gpt-oss-safeguard:20b.meta
│   │   ├── gpt-oss:20b.meta
│   │   ├── llama3.1:8b-instruct-q8_0.meta
│   │   ├── llama3.1:8b.meta
│   │   ├── qwen2.5-coder:14b.meta
│   │   ├── qwen3-coder:30b.meta
│   │   ├── sqlcoder:15b.meta
│   │   └── starcoder2:15b.meta
│   ├── profiles
│   │   ├── code.conf
│   │   ├── creative.conf
│   │   ├── debug.conf
│   │   └── shell.conf
│   └── prompts
│       ├── analyze_log.sh
│       ├── analyze_project.sh
│       ├── comment_script.sh
│       ├── default.sh
│       ├── shell_assist.sh
│       └── test.sh
├── deploy_build_to_ftp.sh
├── lib
│   ├── api
│   │   ├── build_json.sh
│   │   ├── call_api.sh
│   │   └── ollama_api_map.sh
│   ├── commands
│   │   ├── delete.sh
│   │   ├── download.sh
│   │   ├── help.sh
│   │   ├── ins.sh
│   │   ├── list.sh
│   │   ├── log.sh
│   │   ├── profiles.sh
│   │   ├── prompts.sh
│   │   ├── run.sh
│   │   ├── tools.sh
│   │   ├── update.sh
│   │   └── version.sh
│   ├── core
│   │   ├── build_log_ai.sh
│   │   ├── build_log_tool.sh
│   │   ├── calculate_cost.sh
│   │   ├── cleanup.sh
│   │   ├── get_lang.sh
│   │   ├── get_max_content.sh
│   │   ├── map_to_json.sh
│   │   ├── profile.sh
│   │   ├── prompt.sh
│   │   ├── run_tool.sh
│   │   └── tool_registry.sh
│   ├── rag
│   │   ├── build_context.sh
│   │   ├── project_info.sh
│   │   └── system_info.sh
│   ├── renderer
│   │   ├── colors.sh
│   │   ├── fake_cmatrix.sh
│   │   ├── line_separator.sh
│   │   ├── metrics.sh
│   │   └── posix_markdown_render.sh
│   └── tools
│       ├── tree.sh
│       ├── view_file.sh
│       └── write_file.sh
├── logs
├── utils
│   ├── benchmark.sh
│   ├── generate_documentation.sh
│   ├── generate_full_context.sh
│   ├── netinstall.sh
│   └── tool_call_test.sh
└── VERSION

14 directories, 71 files
```
## Commands
- **delete**: Delete a model from the Ollama server
- **download**: Download a model to the Ollama server
- **help**: Show options, commands, and examples
- **ins**: Show full model inspection
- **list**: List available models from the endpoint
- **log**: View the log file in $EDITOR
- **profiles**: View available profiles
- **prompts**: View available prompts
- **run**: Run a command with an explanation or correction
- **tools**: View available tools
- **update**: Update model meta files to include price
- **version**: Show the version banner (>> TERMINAL INTELLIGENCE <<)
## Profiles
- **code**
- **creative**
- **debug**
- **shell**
## Prompts
- **analyze_log**
- **analyze_project**
- **comment_script**
- **default**
- **shell_assist**
- **test**
## Tools
- **tree**
- **view_file**
- **write_file**
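The tool scripts under `lib/tools/` are not reproduced in this document, so the exact contract that `core/tool_registry.sh` and `core/run_tool.sh` expect is unknown. As a rough sketch only, assuming each tool is a small shell function that takes a path and prints plain text the model can consume, a `view_file`-style tool might look like:

```shell
# Hypothetical sketch of a lib/tools/-style tool; the real interface
# used by the toolkit's tool registry is not documented here.
view_file() {
    file=$1
    if [ -r "$file" ]; then
        nl -ba "$file"          # print the file with line numbers
    else
        printf 'error: cannot read %s\n' "$file"
    fi
}
```

A `write_file` counterpart would presumably do the inverse: take a path plus content and report success or failure as plain text.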
## Usage
```
Usage: ai [options] [command] "question"

Options:
  -h | --help     Show this help message
  -v | --version  Show version
  -ui             Show metrics
  -fm             Full metrics
  -d              Enable FULL debug mode
  -dr             Debug RAG without calling the API
  -rag            Enable RAG context
  -e ENDPOINT     Use a specific AI endpoint
  -m MODEL        Use a specific AI model
  -o OUTFILE      Use a specific output filename
  -c KB           Set max context size in KB
  -prompt [val]   Set prompt
  -p [val]        Set model profile
  -mst [val]      Set opt mirostat
  -mste [val]     Set opt mirostat_eta
  -mstt [val]     Set opt mirostat_tau
  -nctx [val]     Set opt num_ctx
  -rln [val]      Set opt repeat_last_n
  -rp [val]       Set opt repeat_penalty
  -temp [val]     Set opt temperature
  -seed [val]     Set opt seed
  -st [val]       Set opt stop
  -tfsz [val]     Set opt tfs_z
  -np [val]       Set opt num_predict
  -topk [val]     Set opt top_k
  -topp [val]     Set opt top_p
  -minp [val]     Set opt min_p

Commands:
  list       List available models
  profile    List available profiles
  prompt     List available prompts
  tools      List available tools
  download   Download model (local server)
  delete     Delete model (local server)
  ins        Inspect full model info
  update     Update model meta files
  run        Run your command with assistance
  log        Show logfile history.json

Examples:
  ai -temp 0.1 "question"
  ai -m gpt-oss:20b ins
  ai run cat hostory.log
  ai -p creative -o response.md "question"
  ai -c 48 -m llama3.1:8b -o test.md "question"
  tail -n 20 history.log | ai -d "Make a report"
  ai history.log "Print lines with error"
```
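The sampling flags above (`-temp`, `-topk`, `-topp`, `-mst`, ...) correspond to standard Ollama model options (`temperature`, `top_k`, `top_p`, `mirostat`, ...). As an illustrative sketch only, and not the toolkit's actual `lib/api/build_json.sh`, such flag values could be folded into the `options` object an Ollama request expects with `jq`:

```shell
# Illustrative only: assemble Ollama-style model options into a JSON
# object with jq. Variable names mirror the flags above; the real
# build_json.sh may work differently.
temperature=0.1
top_k=40
top_p=0.9

options_json=$(jq -n \
    --argjson temperature "$temperature" \
    --argjson top_k "$top_k" \
    --argjson top_p "$top_p" \
    '{temperature: $temperature, top_k: $top_k, top_p: $top_p}')

printf '%s\n' "$options_json"
```

The resulting object would then be embedded in the request body sent to the configured endpoint (presumably by `lib/api/call_api.sh`).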

 

Marek Mihók