Resolve issue (or bug) automatically using LLMs
patchwork ResolveIssue
By default you will need to provide the github_api_key and the issue_url:
patchwork ResolveIssue github_api_key=<Your_GH_Token> issue_url=https://github.com/url/to/issue
The patchflow will read the issue description from the issue_url and identify the files in the repository that may need to be updated to resolve the issue. The list of files is then added to the original issue as a new comment for the user to review.
In addition, if you would like the patchflow to go ahead and fix the issue by creating a pull request, use the fix_issue flag and set it as follows (the command repeats the earlier example with the flag set to true):
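patchwork ResolveIssue github_api_key=<Your_GH_Token> issue_url=https://github.com/url/to/issue fix_issue=true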
Generating a fix requires an LLM. To use a model from OpenAI, pass your API key via the openai_api_key option.
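For example (the key placeholder below is illustrative):

patchwork ResolveIssue github_api_key=<Your_GH_Token> issue_url=https://github.com/url/to/issue fix_issue=true openai_api_key=<Your_OpenAI_API_Key>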
You can also use llama.cpp to run inference on the CPU locally. Just install the llama-cpp-python package and run its OpenAI-compatible web server as described here with the command:
python3 -m llama_cpp.server --hf_model_repo_id TheBloke/deepseek-coder-6.7B-instruct-GGUF --model 'deepseek-coder-6.7b-instruct.Q4_K_M.gguf' --chat_format chatml
Once the local server is running, point the patchflow at it by setting the client base URL option, for example (assuming the option is named client_base_url and llama-cpp-python's default port of 8000):
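client_base_url=http://localhost:8000/v1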
The patchflow packs the relevant code within context_size tokens to pass on to the LLM. You can change the default value by setting the context_size option, but note that a larger context_size doesn't necessarily lead to better fixes.
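For example (the value shown is illustrative, not the default):

patchwork ResolveIssue github_api_key=<Your_GH_Token> issue_url=https://github.com/url/to/issue context_size=4096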
You can customize the prefix of the branch created by the patchflow with branch_prefix, or disable the creation of new branches altogether with disable_branch (commits will then be made on the current branch). You can also disable PR creation with disable_pr, or force push commits to an existing PR with force_pr_creation.
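For example, to commit directly to the current branch without opening a pull request (an illustrative invocation; the key=value syntax for these boolean options is assumed to follow the pattern of the options above):

patchwork ResolveIssue github_api_key=<Your_GH_Token> issue_url=https://github.com/url/to/issue fix_issue=true disable_branch=true disable_pr=true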
If you want to customize the prompt template, note the use of the variables {{messageText}} and {{affectedCode}}. They are generated by the steps within the ResolveIssue patchflow and replaced by the actual values during execution.
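Below is a minimal sketch of what a custom prompt template could look like, assuming patchwork's JSON prompt format (a list of entries with an id and a list of chat messages); the id value and the prompt wording are illustrative, and only the two variables above are filled in by the patchflow:

[
  {
    "id": "resolve_issue",
    "prompts": [
      {
        "role": "system",
        "content": "You are a senior developer. Propose a fix for the reported issue."
      },
      {
        "role": "user",
        "content": "Issue description:\n{{messageText}}\n\nAffected code:\n{{affectedCode}}\n\nSuggest the changes needed to resolve the issue."
      }
    ]
  }
]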