Generate docstrings for your code using LLMs
You can run it as follows:

`patchwork GenerateDocstring`

By default you will need to provide the `openai_api_key` and the `github_api_key`; you can pass them as arguments:

`patchwork GenerateDocstring openai_api_key=<Your_API_KEY> github_api_key=<Your_GH_Token>`
Starting from the `base_path` (default: the current directory), the patchflow recursively traverses the directory to extract methods from the source code files. It then uses them to create a prompt that is sent to `gpt-3.5-turbo` to update the source code with docstrings. You can check the default prompt template. The updated files are then committed to the repository under a new branch, and finally a pull request is created for the user to review and merge the changes. A simplified sketch of this flow is shown below.
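The following is a minimal sketch of that flow, not patchwork's actual implementation: file discovery is simplified to Python files, "method extraction" to collecting function definitions, and the LLM call, commit, and pull request steps are left as comments.

```python
# Illustrative sketch only; the names and structure here are assumptions,
# not patchwork's real internals.
import ast
from pathlib import Path


def collect_methods(base_path: str = ".") -> dict[Path, list[str]]:
    """Recursively find Python files and extract their function names."""
    methods: dict[Path, list[str]] = {}
    for path in Path(base_path).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text())
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files that cannot be parsed
        names = [
            node.name
            for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
        ]
        if names:
            methods[path] = names
    return methods


for path, names in collect_methods(".").items():
    # Each file's methods would be turned into a prompt, sent to the
    # model, and the returned file content (now with docstrings) written
    # back before committing on a new branch and opening a pull request.
    print(f"{path}: {len(names)} method(s) to document")
```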
Since the patchflow calls OpenAI-hosted models by default, you need to set the `openai_api_key` option.
You can also use llama.cpp to run inference on CPU locally. Just install the llama-cpp-python package and run its OpenAI-compatible web server, as described in the llama-cpp-python documentation, with the command:

`python3 -m llama_cpp.server --hf_model_repo_id TheBloke/deepseek-coder-6.7B-instruct-GGUF --model 'deepseek-coder-6.7b-instruct.Q4_K_M.gguf' --chat_format chatml`
Once the local server is running, point the patchflow at it by overriding the client base URL so that requests go to the local endpoint (llama-cpp-python's server listens on http://localhost:8000/v1 by default) instead of the OpenAI API.
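As a quick sanity check before running the patchflow, you can query the local server with any OpenAI-compatible client. This is a sketch assuming llama-cpp-python's default host and port; the model name is a placeholder:

```python
# Minimal sketch: verify the local llama.cpp server answers OpenAI-style
# chat completion requests. The base_url assumes the server's defaults;
# the api_key is required by the client but ignored by the local server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="local-model",  # placeholder; the server uses its loaded GGUF model
    messages=[
        {
            "role": "user",
            "content": "Write a docstring for: def add(a, b): return a + b",
        }
    ],
)
print(response.choices[0].message.content)
```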
You can adjust how changes are pushed: change the prefix of the new branch with `branch_prefix`, or disable the creation of new branches with `disable_branch` (commits will then be made on the current branch). You can also disable PR creation with `disable_pr`, or force push commits to an existing PR with `force_pr_creation`. An example combining some of these options follows.
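For instance, a hypothetical invocation that skips branch and PR creation and commits directly to the current branch (the `true` values are illustrative; only the option names come from the documentation above):

`patchwork GenerateDocstring openai_api_key=<Your_API_KEY> github_api_key=<Your_GH_Token> disable_branch=true disable_pr=true`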
"id": "generate_docstring"
. Note the use of variable {{affectedCode}}
. During the GenerateDocstring patchflow the actual value of this variable is filled in during the execution. The expected output response is complete content of the file with the docstrings.
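To make the substitution concrete, here is an illustrative template entry and fill-in step. Only the `"id"` value and the `{{affectedCode}}` variable come from the documentation above; the other field name and the prompt wording are assumptions:

```python
# Hypothetical prompt entry, sketched as a Python dict; not the actual
# default template shipped with patchwork.
template_entry = {
    "id": "generate_docstring",
    "prompt": (
        "Add docstrings to the code below and return the complete "
        "updated file content:\n\n{{affectedCode}}"
    ),
}

# At execution time the patchflow replaces the variable with the real code.
affected_code = "def add(a, b):\n    return a + b\n"
filled_prompt = template_entry["prompt"].replace("{{affectedCode}}", affected_code)
print(filled_prompt)
```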