2026-04-12 1952 llmit development & thoughts about skills

To use an LLM in the terminal to generate git commit messages, I wrote a small script called llmit to do this job:
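The idea behind such a script can be sketched roughly like this. This is a hypothetical outline, not the actual llmit code; `call_llm` is a stand-in stub for whatever model API or local binary the real script talks to:

```python
import subprocess

def staged_diff() -> str:
    # Collect the staged changes that the commit message should describe.
    result = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True
    )
    return result.stdout

def build_prompt(diff: str) -> str:
    # Ask the model for a single-line commit message describing the diff.
    return "Write a one-line git commit message for this diff:\n\n" + diff

def call_llm(prompt: str) -> str:
    # Stub: a real script would send the prompt to an LLM API or a
    # local model here and return its reply.
    return "chore: placeholder commit message"

def generate_commit_message() -> str:
    return call_llm(build_prompt(staged_diff()))
```

The output would then be handed to `git commit -m`, or shown to the user for editing first.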
https://codeberg.org/HYJING/llmit

After writing it, I started to wonder whether it counts as a kind of skill. Both are integrations of AI and executables. The key difference is that a skill lets the LLM use executables, while my llmit lets an executable use the LLM, which implies a different trust mechanism for each. From my perspective, blindly trusting the output of an LLM is stupid. So skills are unreliable and a piece of sh_t, and workflows built on skills are powered by sh_t.

The logic of embedding an LLM into a workflow via skills should be reversed entirely: treat the LLM as a "wet component" of the system, not as its pillar. Nothing that comes out of the LLM should affect the logic part, which means LLM output may only sit at the end of the logic chain, or be used as input if and only if it has been carefully parsed under strict rules.
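The "strict rules" gate might look like this. A minimal sketch under my own assumptions (the grammar here is an arbitrary conventional-commit-style regex, not anything llmit actually enforces): the LLM's raw reply is only allowed into the deterministic part of the pipeline if it survives parsing; anything that fails validation is discarded rather than trusted.

```python
import re
from typing import Optional

# Rigid grammar for an acceptable commit message: a known type,
# an optional scope, and a short single-line subject.
COMMIT_RE = re.compile(
    r"^(feat|fix|docs|refactor|test|chore)(\([\w-]+\))?: .{1,72}$"
)

def accept_commit_message(raw: str) -> Optional[str]:
    # Normalize: take only the first non-empty line of the reply,
    # since models often wrap answers in chatter.
    stripped = raw.strip()
    line = stripped.splitlines()[0] if stripped else ""
    # Validate against the strict grammar; reject everything else.
    return line if COMMIT_RE.fullmatch(line) else None
```

With this gate, a malformed or chatty reply simply yields `None`, and the deterministic side of the script decides what to do (retry, fall back, or abort); the LLM never steers the logic directly.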

License & Copyright

This article is licensed under the CC BY-NC-SA 4.0 license. Embedded code snippets are released under the GPLv3 license.
If you share or adapt this material, please provide a link back to this original page. See the Full License Policy for more details.