Show HN: Prompting LLMs in Bash scripts (elijahpotter.dev)
62 points by chilipepperhott 4 days ago
https://github.com/elijah-potter/ofc
wunderwuzzi23 13 hours ago
Beware of ANSI escape codes, where the LLM might hijack your terminal, aka Terminal DiLLMa.
https://embracethered.com/blog/posts/2024/terminal-dillmas-p...

thephyber 10 hours ago
Are there any projects to sanitize the output of LLMs before it is injected into Bash scripts or other source code? I get the feeling this will start to break into the OWASP Top 10 in the next few years…

jmholla 9 hours ago
While on the topic, does anybody have a good utility to sanitize things? I'm imagining something I can pipe to:

    xclip -selection clipboard -o | sanitize

I've been meaning to throw something together myself, but I worry I'd miss something.
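One way jmholla's hypothetical `sanitize` filter could be sketched, assuming GNU sed (the `\x1b` hex escape is a GNU extension). This only strips CSI sequences (ESC `[` … final byte) and is a minimal sketch, not a complete defense against terminal escape injection; OSC and DCS sequences would need additional rules:

```shell
#!/usr/bin/env bash
# sanitize: strip ANSI CSI escape sequences from stdin.
# Pattern follows the CSI grammar: ESC [ , parameter bytes (0x30-0x3F),
# intermediate bytes (0x20-0x2F), one final byte (0x40-0x7E).
# Assumes GNU sed; OSC (ESC ] ...) and DCS (ESC P ...) are NOT covered.
sanitize() {
  sed 's,\x1b\[[0-?]*[ -/]*[@-~],,g'
}
```

Used as in the comment above: `xclip -selection clipboard -o | sanitize`.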
TheDong 9 hours ago
I feel like the incumbent for running LLM prompts, including locally, on the CLI is llm: https://github.com/simonw/llm?tab=readme-ov-file#installing-...
How does this compare?
zoobab 3 hours ago
Did a similar curl script to ask questions to Llama3 hosted at DuckDuckGo:
https://github.com/zoobab/curlduck
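zoobab's curl approach generalizes to any OpenAI-compatible endpoint. A hedged sketch, assuming a local Ollama server at the default port and a `llama3` model (both assumptions, not taken from the thread); jq builds the JSON body so the prompt is escaped safely rather than interpolated into a string:

```shell
#!/usr/bin/env bash
# ask: send a prompt to an OpenAI-compatible chat endpoint, print the reply.
# Endpoint and model are assumed (a local Ollama server here).
ask() {
  local body
  # jq -n --arg escapes the prompt as a proper JSON string
  body=$(jq -n --arg p "$1" \
    '{model: "llama3", messages: [{role: "user", content: $p}]}')
  curl -s http://localhost:11434/v1/chat/completions \
    -H 'Content-Type: application/json' \
    -d "$body" \
  | jq -r '.choices[0].message.content'
}
```

A script could then do, e.g., `summary=$(ask "Summarize this log: $(tail -n 20 app.log)")`.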