
chore(deps): update loader dependencies non-major#234

Open
dreadnode-renovate-bot[bot] wants to merge 1 commit into main from renovate/loader-deps

Conversation


@dreadnode-renovate-bot (bot) commented Apr 19, 2026

This PR contains the following updates:


Generated Summary:

  • Updated pydantic version from 2.12.5 to 2.13.1 in both dyana-requirements-gpu.txt and dyana-requirements.txt.
  • Upgraded peft version from 0.18.1 to 0.19.0 in lora/requirements.txt.
  • These changes may introduce performance improvements and new features available in the updated libraries.
  • Ensure compatibility with existing code after upgrading dependencies.
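Concretely, the pins change as follows (file names as listed in this PR):

```
# dyana-requirements.txt and dyana-requirements-gpu.txt
pydantic==2.13.1

# lora/requirements.txt
peft==0.19.0
```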

This summary was generated with ❤️ by rigging

| Package | Change |
| -------------------- | ------------------------ |
| peft | `==0.18.1` → `==0.19.0` |
| pydantic (changelog) | `==2.12.5` → `==2.13.1` |


Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

huggingface/peft (peft)

v0.19.0

Compare Source

Highlights

This PEFT release contains no fewer than nine new PEFT methods, described below. It also contains numerous enhancements that should make PEFT more useful to many users.


New Methods

GraLoRA

@​yeonjoon-jung01 added "GraLoRA: Granular Low-Rank Adaptation for Parameter-Efficient Fine-Tuning" to PEFT (#​2851). This method subdivides the base weight into smaller blocks and applies LoRA to those. This more granular adaptation promises to increase expressiveness and improve performance, especially at higher ranks (64+), closing the gap to full fine-tuning.
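As a rough illustration of the granularity idea (this is back-of-envelope arithmetic, not PEFT's implementation; the equal-budget rank split of r/k per block is an assumption based on the paper's setup):

```python
# Parameter count: standard LoRA vs. a blockwise (GraLoRA-style) split.
# Illustrative only -- real shapes and rank allocation are method details.

def lora_params(d_out: int, d_in: int, r: int) -> int:
    # A is (r, d_in), B is (d_out, r)
    return r * d_in + d_out * r

def gralora_params(d_out: int, d_in: int, r: int, k: int) -> int:
    # Split the weight into a k x k grid of blocks; assume (hedged)
    # each block gets rank r // k, matching the overall budget.
    per_block = lora_params(d_out // k, d_in // k, r // k)
    return k * k * per_block

d, r, k = 4096, 64, 4
print(lora_params(d, d, r))        # 524288
print(gralora_params(d, d, r, k))  # 524288 -- same budget, finer-grained adaptation
```

Under this split, the parameter budget is unchanged; what changes is that each block gets its own low-rank update, which is where the claimed gain in expressiveness comes from.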

BD-LoRA

@​Conzel contributed BD-LoRA: "Block-Diagonal LoRA for Eliminating Communication Overhead in Tensor Parallel LoRA Serving" (#​2895). With BD-LoRA, the LoRA weights are implemented in a block-diagonal way. This reduces communication overhead when using tensor parallelism (TP) and thus enables faster serving.
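The communication saving rests on a simple linear-algebra fact: if the LoRA "B" matrix is block-diagonal, each tensor-parallel shard can produce its slice of the output from its own block alone. A toy NumPy sketch (shapes and the 2-shard split are assumptions for illustration, not BD-LoRA's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
r, d_out = 8, 16                      # LoRA rank, output dim
h = rng.normal(size=(r,))             # intermediate activation, i.e. A @ x

# Two diagonal blocks of B, one per shard.
B1 = rng.normal(size=(d_out // 2, r // 2))
B2 = rng.normal(size=(d_out // 2, r // 2))
B = np.block([[B1, np.zeros_like(B1)],
              [np.zeros_like(B2), B2]])

full = B @ h                                   # monolithic compute
sharded = np.concatenate([B1 @ h[: r // 2],    # shard 0, local data only
                          B2 @ h[r // 2 :]])   # shard 1, local data only
print(np.allclose(full, sharded))  # True -- no cross-shard exchange needed
```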

There is an experiment branch for BD-LoRA support in vLLM: vllm-project/vllm#28136.

Cartridges

Thanks to @​kashif, PEFT now also supports Cartridges (#​2953). The main purpose of this method is to train a prefix to compress a long context to a short size and thus save on tokens. On a low level, this is similar to prefix tuning. The PR also added an example recipe to quickly get started.

PVeRA

"PVeRA: Probabilistic Vector-Based Random Matrix Adaptation" was added to PEFT by @​leofillioux in #​2952. It is an extension of VeRA, a PEFT method that uses weight sharing between layers to be especially parameter efficient. PVeRA builds on top of that by adding a probabilistic element, sampling from the shared parameters and promising better performance overall.

PSOFT

@​fei407 added PSOFT, "Efficient Orthogonal Fine-Tuning with Principal Subspace Adaptation", to PEFT in #​3037. Orthogonal fine-tuning techniques like OFT and BOFT are good at preserving the structure, and thus the capabilities, of the underlying base model. PSOFT improves the efficiency of this technique by constraining the adaptation to a low-rank principal subspace.

Lily

@​yibozhong added Lily: "Low-Rank Interconnected Adaptation across Layers" to PEFT in #​2563. On the surface, Lily is similar to LoRA, but it uses a sophisticated parameter-sharing scheme: the A parameters are shared blockwise (e.g., four consecutive q_proj layers share the same A), while the B parameters come from a globally shared pool and are selected in a data-dependent way by a router. This allows Lily to use higher ranks than LoRA while maintaining a low trainable parameter count.

PEANuT

In #​3084, "PEANuT: Parameter-Efficient Adaptation with Weight-aware Neural Tweakers" was added to PEFT, again by @​yibozhong. PEANuT adds small neural nets (so-called weight-aware neural tweakers) to the base model. Compared to LoRA, this increases expressivity for the same trainable parameter count, or greatly lowers the parameter count without sacrificing expressivity. This comes at the expense of a higher memory requirement for the same parameter count and decreased speed.

TinyLoRA

We have another serial contributor in @​kashif, who also contributed TinyLoRA: "Learning to Reason in 13 Parameters" in #​3024. This is a PEFT method that makes it possible to train an extremely small number of parameters, far fewer than could be achieved even with LoRA at rank 1. The paper shows that, in particular with reinforcement learning, training just a few parameters can often be enough to achieve good results.
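To see why "below rank 1" is notable, consider the parameter count of rank-1 LoRA on a single projection (illustrative arithmetic only; the 4096 hidden size and 32-layer count are assumptions, not figures from the paper):

```python
# Rank-1 LoRA on one 4096x4096 projection trains A (1 x 4096) plus
# B (4096 x 1), and a real model has many such layers.
d = 4096
rank1_single_layer = 1 * d + d * 1
print(rank1_single_layer)             # 8192 parameters for ONE layer

n_layers = 32                         # assumed layer count
print(rank1_single_layer * n_layers)  # 262144 across layers, vs. TinyLoRA's 13
```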

AdaMSS

@​LonglongaaaGo added "AdaMSS: Adaptive Multi-Subspace Approach for Parameter-Efficient Fine-Tuning" to PEFT. This method segments the base weights of the model into smaller subspaces that are targeted for fine-tuning. Moreover, it's possible to dynamically assign a lower parameter budget to less important subspaces during training, similar to what AdaLoRA does. This promises to provide higher expressiveness and better generalization than similar PEFT methods.

Enhancements

Convert non-LoRA adapters to LoRA

In #​2939, we added functions to PEFT that allow converting checkpoints of many non-LoRA methods into LoRA checkpoints. This can be useful because many other packages, e.g. Diffusers and vLLM, support only LoRA and no other PEFT methods. With the new conversion tools, more PEFT methods than just LoRA can thus be used with those packages. Conversion is lossy, but empirical testing showed that with a sufficiently high LoRA rank, the error can be quite low.
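The conversion is lossy because an arbitrary weight delta must be squeezed into a rank-r factorization. The underlying principle can be sketched with a truncated SVD (an illustration of the idea, not PEFT's actual conversion code):

```python
import numpy as np

rng = np.random.default_rng(0)
delta = rng.normal(size=(64, 64))      # stand-in for W_tuned - W_base

def to_lora(delta: np.ndarray, r: int):
    """Best rank-r approximation of delta as B @ A (Eckart-Young)."""
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    B = U[:, :r] * S[:r]               # (d_out, r), singular values folded in
    A = Vt[:r]                         # (r, d_in)
    return B, A

for r in (8, 32, 64):
    B, A = to_lora(delta, r)
    err = np.linalg.norm(delta - B @ A) / np.linalg.norm(delta)
    print(r, round(err, 3))            # error shrinks as r grows; ~0 at full rank
```

This matches the behavior described above: the higher the LoRA rank, the smaller the conversion error.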

LoRA-GA

@​sambhavnoobcoder added a new way to initialize LoRA weights with "LoRA-GA: Low-Rank Adaptation with Gradient Approximation" (#​2926). This allows you to initialize the LoRA weights in a way that aligns the gradients with full fine-tuning and should lead to faster training convergence.

Reducing intruder dimensions

In "LoRA vs Full Fine-tuning: An Illusion of Equivalence", the authors showed that LoRA fine-tuning can introduce so-called "intruder dimensions", which contribute to forgetting. PEFT now has a utility function to remove intruder dimensions, reduce_intruder_dimension. When calling this on a fine-tuned LoRA model, forgetting should be reduced while the fine-tuned task performance should remain almost the same.

Transformer Engine

In #​3048, @​balvisio added support for Transformer Engine, a quantization method by NVIDIA, to PEFT.

Tensor Parallel Support

In a series of PRs (#​3079, #​3091, #​3096), @​michaelbenayoun added support for Tensor Parallelism to LoRA.

Weight tying improvements

In many LLMs, the embedding and the LM head have tied weights to save on parameter count. This can, however, lead to tricky situations when trying to fine-tune those layers. Through a series of PRs (#​2803, #​2922, #​2870, #​2879, #​3126), we improved the user experience when doing so. Most notably, users can now pass ensure_weight_tying=True to their PEFT config to force weight tying to be upheld. Please check the PEFT weight tying docs for how weight tying is now being handled. Thanks to @​romitjain, @​sambhavnoobcoder, and @​Cursx for their contributions.

Low-precision float type support

#​3055 makes LoRA work with base models that use very low precision floats like torch.float8_e4m3fn. An example of that would be MiniMax-M2.5.

Zero init for PrefixTuning

#​3128 introduces zero init to Prefix Tuning which, according to our benchmarks, reduced the result variance significantly and yielded good task accuracy without the need for prompt engineering.

LoftQ + int8 quantization

With #​3088, the LoftQ implementation now supports correcting errors for int8 quantization (without utilizing activation thresholding), alongside the already existing nf4 quantization.

Changes

Removal of Bone

The Bone PEFT method was removed in #​3115. Users are directed to use MiSS instead, which is the improved replacement for Bone. Use this Bone-to-MiSS conversion script if you want to port old Bone checkpoints.

AutoGPTQ and AutoAWQ

These two quantization methods now use GPTQModel as their backend (#​2932) thanks to @​ZX-ModelCloud.

Handling of requires_grad in modules_to_save

Previously, PEFT would enable requires_grad on the original module if the corresponding modules_to_save copy was disabled. This is almost never desirable and was thus fixed. Although this change is technically backwards-incompatible, it affects an extremely niche case, so we don't expect any user to be negatively affected by it.

All Changes

New Contributors

Full Changelog: huggingface/peft@v0.18.1...v0.19.0

pydantic/pydantic (pydantic)

v2.13.1: 2026-04-15

Compare Source

What's Changed
Fixes

Full Changelog: pydantic/pydantic@v2.13.0...v2.13.1

v2.13.0

Compare Source

GitHub release

The highlights of the v2.13 release are available in the blog post.
Several minor changes (considered non-breaking changes according to our versioning policy)
are also included in this release. Make sure to look into them before upgrading.

This release contains the updated pydantic.v1 namespace, matching version 1.10.26 which includes support for Python 3.14.

What's Changed

See the beta releases for all changes since 2.12.

New Features
  • Allow default factories of private attributes to take validated model data by @​Viicos in #​13013
Changes
Fixes
  • Change type of Any when synthesizing _build_sources for BaseSettings.__init__() signature in the mypy plugin by @​Viicos in #​13049
  • Fix model equality when using runtime extra configuration by @​Viicos in #​13062
Packaging
New Contributors

Configuration

📅 Schedule: (UTC)

  • Branch creation
    • At any time (no schedule defined)
  • Automerge
    • At any time (no schedule defined)

🚦 Automerge: Enabled.

Rebasing: Whenever the PR becomes conflicted, or you tick the rebase/retry checkbox.

👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

| datasource | package  | from   | to     |
| ---------- | -------- | ------ | ------ |
| pypi       | peft     | 0.18.1 | 0.19.0 |
| pypi       | pydantic | 2.12.5 | 2.13.1 |
@dreadnode-renovate-bot (bot) added the type/digest Dependency digest updates label Apr 19, 2026