Upgrade dependencies and evaluate breaking changes using LLMs. To run this patchflow:

```
patchwork DependencyUpgrade
```
By default you will need to provide the `openai_api_key` and the `github_api_key`. You can pass them as arguments:

```
patchwork DependencyUpgrade openai_api_key=<Your_API_KEY> github_api_key=<Your_GH_Token>
```
The patchflow uses `gpt-3.5-turbo` by default to update your package manager file. You can check the default prompt template. The fixed package manager file is then committed to the repository under a new branch, and finally a pull request is created for the user to review and merge the changes.
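For a Python project, the resulting change to the package manager file might look like the following (hypothetical packages and versions, shown as a diff purely for illustration):

```diff
 # requirements.txt
-requests==2.25.0
+requests==2.31.0
-flask==1.1.2
+flask==2.3.3
```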
Any service exposing an OpenAI-compatible API can be used by setting the `openai_api_key` option accordingly.
You can also use llama.cpp to run inference on CPU locally. Just install the llama-cpp-python package and run its OpenAI-compatible web server, as described in the llama-cpp-python documentation, with the command:

```
python3 -m llama_cpp.server --hf_model_repo_id TheBloke/deepseek-coder-6.7B-instruct-GGUF --model 'deepseek-coder-6.7b-instruct.Q4_0.gguf'
```
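Because the llama-cpp-python server speaks the OpenAI chat-completions protocol, any OpenAI-style client can talk to it. Below is a minimal sketch using only the Python standard library; the base URL (llama-cpp-python's server defaults to port 8000) and the model name are assumptions matching the command above, not values prescribed by patchwork:

```python
import json
import urllib.request

# Assumption: local llama-cpp-python server on its default port.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,  # deterministic output suits mechanical edits
    }

def send_chat_request(payload: dict) -> dict:
    """POST the payload to the local server (requires the server to be running)."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    payload = build_chat_request(
        "deepseek-coder-6.7b-instruct.Q4_0.gguf",
        "Update requests to 2.31.0 in this requirements.txt: requests==2.25.0",
    )
    print(json.dumps(payload, indent=2))
```

`send_chat_request` is only a sketch of the wire format; the patchflow itself handles the actual model calls.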
Once the local server is running, you can configure the patchflow to use it instead of the OpenAI endpoint.
The `analyze_impact` option can be set to enable an impact analysis of the library upgrades.
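Following the key=value argument syntax shown earlier, enabling the analysis would look like this (the API-key arguments are as before; only `analyze_impact` is new):

```
patchwork DependencyUpgrade openai_api_key=<Your_API_KEY> github_api_key=<Your_GH_Token> analyze_impact=true
```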
You can customize the name of the created branch with `branch_prefix`, or disable the creation of new branches with `disable_branch` (commits will be made on the current branch). You can also disable PR creation with `disable_pr`, or force push commits to an existing PR with `force_pr_creation`.
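For example, to commit directly to the current branch without opening a pull request, the invocation would look like this (a sketch combining the options above, assuming boolean options follow the same key=value syntax):

```
patchwork DependencyUpgrade openai_api_key=<Your_API_KEY> github_api_key=<Your_GH_Token> disable_branch=true disable_pr=true
```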
The default prompt template has `"id": "depupgrade"`. Note the use of the variables `{{Updates}}` and `{{PackageManagerFile}}`: they are generated by the steps within the DependencyUpgrade patchflow and replaced by the actual values during execution. The expected output response is the complete content of the package manager file with the libraries updated to their fixed versions.
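A schematic sketch of what such a prompt template could look like follows; this illustrates the structure implied by the text (an entry identified by `"id": "depupgrade"` whose prompt interpolates the two variables), and is not the actual template shipped with patchwork:

```json
[
  {
    "id": "depupgrade",
    "prompts": [
      {
        "role": "user",
        "content": "The following dependency updates are available:\n{{Updates}}\n\nHere is the current package manager file:\n{{PackageManagerFile}}\n\nReturn the complete package manager file with the libraries updated to their fixed versions."
      }
    ]
  }
]
```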