Generate documentation for your code using LLMs:

```bash
patchwork GenerateREADME folder_path=/path/to/your/folder
```
By default you will need to provide the `openai_api_key` and the `github_api_key`. You can pass them as arguments:

```bash
patchwork GenerateREADME openai_api_key=<Your_API_KEY> github_api_key=<Your_GH_Token> folder_path=/path/to/your/folder
```
The patchflow passes the contents of your files to an LLM (`gpt-3.5-turbo` by default) to generate a README.md file. You can check the default prompt template. The generated README.md file is then committed to the repository under a new branch, and finally a pull request is created for the user to review and merge the changes.
Since the default model is served through OpenAI's API, a valid key must be supplied via the `openai_api_key` option.
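If your version of patchwork supports choosing the model via a `model` argument (an assumption here; this README does not name the option), a run with a different OpenAI model might look like:

```bash
# Assumes the patchflow accepts a `model` key=value argument; check `patchwork --help`.
patchwork GenerateREADME model=gpt-4 openai_api_key=<Your_API_KEY> github_api_key=<Your_GH_Token> folder_path=/path/to/your/folder
```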
You can also use llama.cpp to run inference on CPU locally. Just install the `llama-cpp-python` package and run its OpenAI-compatible web server, as described in the project's documentation, with the command:

```bash
python3 -m llama_cpp.server --hf_model_repo_id TheBloke/deepseek-coder-6.7B-instruct-GGUF --model 'deepseek-coder-6.7b-instruct.Q4_0.gguf'
```

Once the local server is running, you can point the patchflow at the local endpoint instead of OpenAI's API, as sketched below.
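A hypothetical invocation, assuming the patchflow exposes a `client_base_url` option for OpenAI-compatible endpoints (the option name is an assumption; llama-cpp-python listens on `http://localhost:8000` by default):

```bash
# `client_base_url` and the dummy key value are assumptions;
# the local server does not validate the API key.
patchwork GenerateREADME client_base_url=http://localhost:8000/v1 openai_api_key=no_key_needed folder_path=/path/to/your/folder
```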
Several other options let you customize the patchflow; a combined example follows the list.

- `filter`: limit which files in the folder are processed.
- `suppress_comments`: strip comments from the code before it is sent to the LLM.
- `markdown_file_name`: change the name of the generated markdown file.
- `branch_prefix`: customize the prefix of the branch created for the changes.
- `disable_branch`: disable the creation of new branches (commits will be made on the current branch).
- `disable_pr`: disable PR creation.
- `force_pr_creation`: force push commits to an existing PR.
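For illustration, a run combining several of these options might look like this (the option names are from this README, but the value syntax, such as booleans passed as `true`, is an assumption):

```bash
# Values shown are illustrative, not defaults.
patchwork GenerateREADME folder_path=/path/to/your/folder \
  filter=*.py \
  suppress_comments=true \
  markdown_file_name=DOCUMENTATION.md \
  branch_prefix=docs- \
  disable_pr=true
```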
The default prompt template contains the variable `{{fullContent}}`. It is generated by the steps within the GenerateREADME patchflow and replaced by the actual value during execution. The expected output response is the complete content of the README.md file with the documentation.
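To make the substitution concrete, here is a hypothetical template sketch; the JSON layout is an assumption modeled on patchwork-style prompt files and may not match your version's schema:

```bash
# Hypothetical sketch only: the id/prompts/role/content layout is an assumption.
# {{fullContent}} is the placeholder the patchflow replaces with your files' content.
cat > my_prompt.json <<'EOF'
[
  {
    "id": "generateREADME",
    "prompts": [
      {
        "role": "user",
        "content": "Write a complete README.md for the following code:\n\n{{fullContent}}"
      }
    ]
  }
]
EOF
```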