Candle: Torch Replacement in Rust


Candle is a minimalist ML framework for Rust with a focus on performance (including GPU support)
and ease of use. Try our online demos:
whisper,
llama2.

use candle_core::{Device, Tensor};

let a = Tensor::randn(0f32, 1., (2, 3), &Device::Cpu)?;
let b = Tensor::randn(0f32, 1., (3, 4), &Device::Cpu)?;

// Matrix multiplication: (2, 3) x (3, 4) -> (2, 4).
let c = a.matmul(&b)?;
println!("{c}");

Check out our examples:

  • Whisper: speech recognition model.
  • Llama and Llama-v2: general LLMs.
  • Falcon: general LLM.
  • Bert: useful for sentence embeddings.
  • StarCoder: LLM specialized in code generation.
  • Stable Diffusion: text-to-image generative model.
  • DINOv2: computer vision model trained using self-supervision (can be
    used for ImageNet classification, depth estimation, segmentation).

Run them using the following commands:

cargo run --example whisper --release
cargo run --example llama --release
cargo run --example falcon --release
cargo run --example bert --release
cargo run --example bigcode --release
cargo run --example stable-diffusion --release -- --prompt "a rusty robot holding a fire torch"
cargo run --example dinov2 --release -- --image path/to/myinput.jpg

To use CUDA, add --features cuda to the example command line. If you have
cuDNN installed, use --features cudnn for even more speedups.
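As a concrete sketch, any of the example commands above can be combined with these feature flags (assuming a CUDA-capable GPU and toolkit are installed; llama is used here only for illustration):

```shell
# Run the llama example on the GPU via the CUDA backend.
cargo run --example llama --release --features cuda

# If cuDNN is installed, enable it for additional speedups.
cargo run --example llama --release --features cudnn
```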

There are also some wasm examples for whisper and
llama2.c. You can either build them with
trunk or try them online:
whisper,
llama2.

For llama2, run the following command to retrieve the weight files and start a
test server:

cd candle-wasm-examples/llama2-c
wget https://huggingface.co/spaces/lmz/candle-llama2/resolve/main/model.bin
wget https://huggingface.co/spaces/lmz/candle-llama2/resolve/main/tokenizer.json
trunk serve --release --public-url /candle-llama2/ --port 8081

And then head over to
http://localhost:8081/candle-llama2.

Features

  • Simple syntax, looks and feels like PyTorch.
  • Backends:
    • Optimized CPU backend with optional MKL support for x86 and Accelerate for Macs.
    • CUDA backend for efficiently running on GPUs, with multi-GPU distribution via NCCL.
    • WASM support: run your models in a browser.
  • Included models:
    • LLMs: Llama v1 and v2, Falcon, StarCoder.
    • Whisper (multilingual support).
    • Stable Diffusion.
    • Computer vision: DINOv2.
  • Serverless (on CPU): small and fast deployments.
  • Quantization support using the llama.cpp quantized types.

How to use

Cheatsheet:

|            | Using PyTorch                     | Using Candle                                                             |
|------------|-----------------------------------|--------------------------------------------------------------------------|
| Creation   | torch.Tensor([[1, 2], [3, 4]])    | Tensor::new(&[[1f32, 2.], [3., 4.]], &Device::Cpu)?                      |
| Creation   | torch.zeros((2, 2))               | Tensor::zeros((2, 2), DType::F32, &Device::Cpu)?                         |
| Indexing   | tensor[:, :4]                     | tensor.i((.., ..4))?                                                     |
| Operations | tensor.view((2, 2))               | tensor.reshape((2, 2))?                                                  |
| Operations | a.matmul(b)                       | a.matmul(&b)?                                                            |
| Arithmetic | a + b                             | &a + &b                                                                  |
| Device     | tensor.to(device="cuda")          | tensor.to_device(&Device::new_cuda(0)?)?                                 |
| Dtype      | tensor.to(dtype=torch.float16)    | tensor.to_dtype(DType::F16)?                                             |
| Saving     | torch.save({"A": A}, "model.bin") | candle::safetensors::save(&HashMap::from([("A", A)]), "model.safetensors")? |
| Loading    | weights = torch.load("model.bin") | candle::safetensors::load("model.safetensors", &device)                  |

FAQ

Why should I use Candle?

Candle’s core goal is to make serverless inference possible. Full machine learning frameworks like PyTorch
are very large, which makes creating instances on a cluster slow. Candle allows deployment of lightweight
binaries.

Secondly, Candle lets you remove Python from production workloads. Python overhead can seriously hurt performance,
and the GIL is a notorious source of headaches.

Finally, Rust is cool! A lot of the HF ecosystem already has Rust crates, like safetensors and tokenizers.

Other ML frameworks

  • dfdx is a formidable crate, with shapes being included
    in types. This prevents a lot of headaches by getting the compiler to complain about shape mismatches right off the bat.
    However, we found that some features still require nightly, and writing code can be a bit daunting for non-Rust experts.

    We’re leveraging and contributing to other core crates for the runtime so hopefully both crates can benefit from each
    other.

  • burn is a general crate that can leverage multiple backends so you can choose the best
    engine for your workload.

  • tch-rs: bindings to the torch library in Rust. Extremely versatile, but it
    brings the entire torch library into the runtime. The main contributor of tch-rs is also involved in the development
    of candle.

Common Errors

Missing symbols when compiling with the mkl feature.

If you get some missing symbols when compiling binaries/tests using the mkl
or accelerate features, e.g. for mkl you get:

 =note: /usr/bin/ld: (....o): in function `blas::sgemm':
          .../blas-0.22.0/src/lib.rs:1944: undefined reference to `sgemm_'
          collect2: error: ld returned 1 exit status

 =note: some `extern` functions couldn't be found; some native libraries may need to be installed or have their path specified
 =note: use the `-l` flag to specify native libraries to link
 =note: use the `cargo:rustc-link-lib` directive to specify the native libraries to link with Cargo

or for accelerate:

Undefined symbols for architecture arm64:
            "_dgemm_", referenced from:
                candle_core::accelerate::dgemm::h1b71a038552bcabe in libcandle_core...
            "_sgemm_", referenced from:
                candle_core::accelerate::sgemm::h2cf21c592cba3c47 in libcandle_core...
          ld: symbol(s) not found for architecture arm64

This is likely due to a missing linker flag that was needed to enable the mkl library. You
can try adding the following for mkl at the top of your binary:

extern crate intel_mkl_src;

or for accelerate:

extern crate accelerate_src;

Cannot run the llama example: access to source requires login credentials

Error: request error: https://huggingface.co/meta-llama/Llama-2-7b-hf/resolve/main/tokenizer.json: status code 401

This is likely because you don't have permission to access the llama-v2 model.
To fix this, register on the huggingface-hub, accept the llama-v2 model
conditions, and set up your authentication token. See issue
#350 for more details.

Tracking down errors

You can set RUST_BACKTRACE=1 to get a backtrace whenever a candle error is
generated.
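For example, prepending the variable to one of the example commands from above:

```shell
# Print a full backtrace if the example fails with a candle error.
RUST_BACKTRACE=1 cargo run --example llama --release
```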
