Review pull requests using LLMs
```
patchwork PRReview pr_url=<Link_to_your_PR>
```
By default, you will need to provide the `openai_api_key` and the `github_api_key`. You can pass them as arguments:

```
patchwork PRReview openai_api_key=<Your_API_KEY> github_api_key=<Your_GH_Token> pr_url=<Link_to_your_PR>
```
The patchflow uses `gpt-3.5-turbo` by default to create a summary. You can check the default prompt template. The PR summary is then added as a comment to the PR. The OpenAI API is called with the key you supply via the `openai_api_key` option.
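If the patchflow also accepts a `model` option (an assumption here, not stated in the text above), switching to another OpenAI model would look something like:

```
# Hypothetical invocation: assumes a `model` option is accepted
patchwork PRReview model=gpt-4 openai_api_key=<Your_API_KEY> github_api_key=<Your_GH_Token> pr_url=<Link_to_your_PR>
```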
You can also use llama.cpp to run inference on CPU locally. Just install the llama-cpp-python package and run its OpenAI-compatible web server, as described in the llama-cpp-python documentation, with the command:

```
python3 -m llama_cpp.server --hf_model_repo_id TheBloke/deepseek-coder-6.7B-instruct-GGUF --model 'deepseek-coder-6.7b-instruct.Q4_0.gguf'
```
Once the local server is running, you can set:
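As a sketch, assuming patchwork accepts a `client_base_url` option for OpenAI-compatible endpoints and that the server runs at the llama-cpp-python default of port 8000 under `/v1` (both assumptions, not confirmed by the text above):

```
# Assumed option name `client_base_url`; the local server typically
# does not validate the API key, so a placeholder value is used.
patchwork PRReview client_base_url=http://localhost:8000/v1 openai_api_key=any_key pr_url=<Link_to_your_PR>
```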
If the `diff_suggestion` option is set, the PRReview patchflow will generate suggestions to improve the PR along with the summary; see the combined example below.
You can control the size of the summary with the `diff_summary` option. Acceptable values are `long`, `short` and `none`.
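Putting both options together (a sketch; the `true` value syntax for `diff_suggestion` is an assumption):

```
# Hypothetical: assumes booleans are passed as key=value strings,
# consistent with the other options shown above.
patchwork PRReview diff_suggestion=true diff_summary=short openai_api_key=<Your_API_KEY> github_api_key=<Your_GH_Token> pr_url=<Link_to_your_PR>
```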
"id": "diffreview"
. Note the use of variables {{other_fields}}
, {{path}}
and {{diff}}
. They are generated by the steps within the PRReview patchflow and are replaced by the actual values during the execution. Also, remember to keep the output format as given in the default prompt since it is used to extract information that is used by the steps after the model response is processed. The following output format is expected at the moment:
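The authoritative format is the one in the default prompt template; purely as an illustration of the shape implied above (the key name `summary` is an assumption, not taken from the source):

```
{
  "summary": "<description of the changes in the PR>"
}
```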
If the `diff_suggestion` option is set, the expected output format is:
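Again as an illustration only (key names assumed, not confirmed), the same object extended with a field for the suggestions:

```
{
  "summary": "<description of the changes in the PR>",
  "suggestion": "<suggested improvements to the PR>"
}
```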