The ResolveIssue patchflow aims to automatically resolve issues (or bugs) in your repository.

How to run?

You can run it as follows:

patchwork ResolveIssue

By default, you will need to provide the github_api_key and the issue_url:

patchwork ResolveIssue github_api_key=<Your_GH_Token> issue_url=https://github.com/url/to/issue

What it does?

The ResolveIssue patchflow will embed the entire repository in a vector database (Chroma DB) using an embedding model (SentenceTransformers). It will then extract the issue description and comments from the issue_url and identify the files in the repository that may need to be updated to resolve the issue. The list of files is then added to the original issue as a new comment for the user to review.
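
To picture the retrieval step, here is a minimal, illustrative sketch using chromadb and sentence-transformers. It is not patchwork's actual code: the embedding model name, the file filter, and the issue text below are assumptions.

# Illustration only: embed repository files and query them with the issue text.
from pathlib import Path

import chromadb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
client = chromadb.Client()
collection = client.create_collection(name="repo")

# Embed each source file (a real pipeline would chunk large files first).
files = [p for p in Path(".").rglob("*.py") if ".git" not in p.parts]
for i, path in enumerate(files):
    text = path.read_text(errors="ignore")
    collection.add(
        ids=[str(i)],
        embeddings=[model.encode(text).tolist()],
        metadatas=[{"path": str(path)}],
    )

# Query with the issue description and comments to find candidate files.
issue_text = "Crash when parsing empty config files"  # made-up example issue
hits = collection.query(query_embeddings=[model.encode(issue_text).tolist()], n_results=5)
print([m["path"] for m in hits["metadatas"][0]])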

Configuration

The following are the default configurations that can be modified by the user to adapt the ResolveIssue patchflow to their needs. All the options can be set both via CLI arguments and the yaml config file.

Fix issue

By default, the patchflow will only identify the files that need to be updated in order to fix the issue. If you want to generate a PR that automatically fixes the bug described in the issue, set the fix_issue flag as follows:

fix_issue: true
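
Since all options can also be passed as CLI arguments, the same flag can be added to the invocation shown earlier:

patchwork ResolveIssue github_api_key=<Your_GH_Token> issue_url=https://github.com/url/to/issue fix_issue=true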

Model

You can choose any LLM API as long as it has an OpenAI API compatible chat completions endpoint. Just update the default values of the following options:

model: gpt-3.5-turbo
client_base_url: https://api.openai.com/v1

E.g. to use Meta’s CodeLlama model from HuggingFace you can set:

client_base_url: https://api-inference.huggingface.co/models/codellama/CodeLlama-70b-Instruct-hf/v1
model: codellama/CodeLlama-70b-Instruct-hf
model_temperature: 0.2
model_top_p: 0.95
model_max_tokens: 2000

and pass your HuggingFace token in the openai_api_key option.
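
Independently of patchwork, you can sanity-check that an endpoint really is OpenAI-compatible with the official openai Python client; the same snippet works for the local llama.cpp server described below by swapping in its base_url and model. This is only an illustrative check, and the prompt text is made up.

# Illustration only: verify an OpenAI-compatible chat completions endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://api-inference.huggingface.co/models/codellama/CodeLlama-70b-Instruct-hf/v1",
    api_key="<Your_HF_Token>",  # passed to patchwork via the openai_api_key option
)
response = client.chat.completions.create(
    model="codellama/CodeLlama-70b-Instruct-hf",
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    temperature=0.2,
    max_tokens=200,
)
print(response.choices[0].message.content)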

You can also use llama.cpp to run inference locally on the CPU. Just install the llama-cpp-python package and run its OpenAI compatible web server as described here with the command:

python3 -m llama_cpp.server --hf_model_repo_id TheBloke/deepseek-coder-6.7B-instruct-GGUF --model 'deepseek-coder-6.7b-instruct.Q4_K_M.gguf' --chat_format chatml
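
If the web server dependencies are missing, the server extra of llama-cpp-python typically provides them (check the llama-cpp-python documentation for platform-specific install notes):

pip install 'llama-cpp-python[server]'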

Once the local server is running you can set:

client_base_url: http://localhost:8000/v1
model: TheBloke/deepseek-coder-6.7B-instruct-GGUF
model_temperature: 0.2
model_top_p: 0.95
model_max_tokens: 1000

and use the local model for inference.

Context size

By default, we chunk the code into context_size tokens to pass on to the LLM. You can change the default value by setting:

context_size: 1000

In general, we have found that a larger context_size doesn’t necessarily lead to better fixes.
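
As an illustration of what this setting controls (not patchwork's exact chunking logic), token-based chunking can be sketched with a tokenizer such as tiktoken; the encoding name and file path are assumptions.

# Illustration only: split a file into chunks of roughly `context_size` tokens.
import tiktoken

def chunk_code(text: str, context_size: int = 1000) -> list[str]:
    # cl100k_base is an assumed stand-in for whatever tokenizer the model uses.
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    return [enc.decode(tokens[i:i + context_size]) for i in range(0, len(tokens), context_size)]

chunks = chunk_code(open("src/main.py").read(), context_size=1000)  # hypothetical file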

Manage PRs

In addition, there are options that let you manage the PRs as you like: set a branch_prefix, or disable the creation of new branches with disable_branch (commits will then be made on the current branch). You can also disable PR creation with disable_pr or force push commits to an existing PR with force_pr_creation.

branch_prefix: resolve-issue-Fixes
disable_branch: false
disable_pr: false
force_pr_creation: false
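
For example, to fix the issue by committing directly to the current branch without opening a PR, the options can be combined on the command line:

patchwork ResolveIssue github_api_key=<Your_GH_Token> issue_url=https://github.com/url/to/issue fix_issue=true disable_branch=true disable_pr=true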

Prompt template

You can update the default prompt template. Note the use of the variables {{messageText}} and {{affectedCode}}. They are generated by the steps within the ResolveIssue patchflow and replaced with the actual values during execution.
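
Conceptually, the substitution works along the following lines; this sketch is only an illustration, and the template text and variable contents are made up rather than taken from the default template.

# Illustration only: how {{messageText}} and {{affectedCode}} are conceptually filled in.
issue_description = "Crash when parsing empty config files"  # produced by an earlier step
retrieved_code_chunks = "def parse(path): ..."               # produced by an earlier step

template = (
    "Fix the issue described below.\n\n"
    "Issue:\n{{messageText}}\n\n"
    "Relevant code:\n{{affectedCode}}"
)
prompt = template.replace("{{messageText}}", issue_description).replace(
    "{{affectedCode}}", retrieved_code_chunks
)
print(prompt)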

Examples

Here are some examples with the ResolveIssue patchflow: