Crypto millionaires population up 95% in one year, survey shows
New presale coin may leave Toncoin and Solana in the dust
Spot BTC ETFs surpass $18b amid increased investor confidence
US-based Bitcoin ETFs saw a net inflow of $202.6 million on Aug. 26, according to SoSo Value data.

Ticker | Sponsor   | Prem./Dsc. | Aug. 26 Inflow | Net Inflow | BTC Share | Value Traded
IBIT   | BlackRock | -0.13%     | $224.06M       | $20.93B    | 1.81%     | $704.81M
GBTC   | Grayscale | +0.02%     | $0.00          | -$19.73B   | 1.15%     | $141.54M
FBTC   | Fidelity  | -0.22%     | -$8.33M        | $9.88B     | 0.91%     | $164.37M
ARKB   | Ark       | -0.23%     | $0.00          | […]
BlackRock and Grayscale own 2.96% of Bitcoin circulating supply amid $202 million net inflow
The recent price action suggested that the bulls gained control, but strong resistance at $162 could prevent an immediate rally.
Traders should closely monitor the $162 resistance, as a break
Solana: Why THIS price level could be crucial for SOL’s upcoming rally
ViaBTC is now adding a new coin, BEL (Bellscoin), to its merged mining options, enhancing mining rewards for Litecoin miners. By joining ViaBTC to mine LTC, you’ll receive triple rewards: $LTC, $DOGE, and $BEL.
Mine LTC and get BEL (Bellscoin) with Zero Fee for the first month on ViaBTC
An analyst predicted that if BNB can break through the current falling wedge and a significant resistance level, it could reach a new peak.
On-chain metrics, however, signal that these critical breaches
BNB set to surge to $900 IF two key breakthroughs occur
Nvidia’s success has previously sparked moves in GPU and AI-focused crypto tokens in the spot market.
Upcoming US economic data could fuel prevailing sentiment and add uncertainty to the broader market.
Nvidia’s Q2 results: A turning point for Artificial Superintelligence Alliance?
Avalanche unlocks 1.33% of AVAX’s max supply.
Analysts say the bottom might be in.
Solana [SOL], Worldcoin [WLD], and Avalanche [AVAX] led the charge for altcoin token unlocks, with AVAX releasing 9.54M tokens.
Will the release of 9.54M AVAX tokens impact the price of Avalanche?
Image generation tools are hotter than ever, and they’ve never been more powerful. Models like PixArt Sigma and Flux.1 are leading the charge, thanks to their open weights and permissive licenses. This setup allows for creative tinkering, including training LoRAs without sharing data outside your computer.
However, working with these models can be challenging if you’re using older or less VRAM-rich GPUs. Typically, there’s a trade-off between quality, speed, and VRAM usage. In this blog post, we’ll focus on optimizing for speed and lower VRAM usage while maintaining as much quality as possible. This approach works exceptionally well for PixArt due to its smaller size, but results might vary with Flux.1. I’ll share some alternative solutions for Flux.1 at the end of this post.
Both PixArt Sigma and Flux.1 are transformer-based, which means they benefit from the same quantization techniques used by large language models (LLMs). Quantization involves compressing the model’s components to use less memory. It allows you to keep all model components in GPU VRAM simultaneously, leading to faster generation speeds compared to methods that move weights between the GPU and CPU, which can slow things down.
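For intuition about the savings, here is a quick back-of-the-envelope estimate of weight memory at different precisions; the 4.7B parameter count is just an illustrative stand-in for a large text encoder, not a measured size for either model:

# Rough weight-memory estimate: parameter count times bits per weight, converted to bytes
params = 4.7e9  # illustrative parameter count, not an exact model size
for name, bits in [("float16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: {params * bits / 8 / 1e9:.1f} GB")

Halving the bit width roughly halves the weight footprint, which is what lets all components stay resident in VRAM at once.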
Let’s dive into the setup!
First, ensure you have Nvidia drivers and Anaconda installed.
Next, create a Python environment and install the main requirements.
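If you don’t already have a dedicated environment, you can create and activate one first (the environment name and Python version here are just placeholders):

conda create -n diffusion-lowvram python=3.10 -y
conda activate diffusion-lowvram

With the environment active, install PyTorch with CUDA support: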
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
Then install the Diffusers and Quanto libraries:
pip install pillow==10.3.0 loguru~=0.7.2 optimum-quanto==0.2.4 diffusers==0.30.0 transformers==4.44.2 accelerate==0.33.0 sentencepiece==0.2.0
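Before moving on, a quick optional sanity check (not part of the original setup) that PyTorch can see your GPU:

import torch
print(torch.cuda.is_available())      # should print True
print(torch.cuda.get_device_name(0))  # name of the detected GPU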
Here’s a simple script to get you started for PixArt-Sigma:
from optimum.quanto import qint8, qint4, quantize, freeze
from diffusers import PixArtSigmaPipeline
import torch

# Load the pipeline in half precision
pipeline = PixArtSigmaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS", torch_dtype=torch.float16
)

# Quantize the diffusion transformer to 8-bit and the text encoder to 4-bit,
# then freeze them so the quantized weights are fixed
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)
quantize(pipeline.text_encoder, weights=qint4, exclude="proj_out")
freeze(pipeline.text_encoder)

# Move all components to the GPU
pipe = pipeline.to("cuda")

# Generate two images with fixed seeds for reproducibility
for i in range(2):
    generator = torch.Generator(device="cpu").manual_seed(i)
    prompt = "Cyberpunk cityscape, small black crow, neon lights, dark alleys, skyscrapers, futuristic, vibrant colors, high contrast, highly detailed"
    image = pipe(prompt, height=512, width=768, guidance_scale=3.5, generator=generator).images[0]
    image.save(f"Sigma_{i}.png")
Understanding the script: the pipeline is loaded in float16, the transformer is quantized to 8-bit (qint8) and the text encoder to 4-bit (qint4), both are frozen so the quantized weights are fixed, and the pipeline is moved to the GPU before generating the images.
Save the script and run it in your environment. You should see images generated from the prompt “Cyberpunk cityscape, small black crow, neon lights, dark alleys, skyscrapers, futuristic, vibrant colors, high contrast, highly detailed” saved as Sigma_0.png and Sigma_1.png. Generation takes 6 seconds on an RTX 3080 GPU.
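If you want to verify the savings on your own card, you can append a couple of lines after the generation loop to report the peak VRAM that PyTorch allocated (an optional addition, not part of the original script):

# Peak VRAM allocated by PyTorch during generation
peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak VRAM allocated: {peak_gb:.2f} GiB")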
You can achieve similar results with Flux.1 Schnell, despite its additional components, but it requires more aggressive quantization, which lowers quality (unless you have access to more VRAM, say 16 or 25 GB).
import torch
from optimum.quanto import qint2, qint4, quantize, freeze
from diffusers.pipelines.flux.pipeline_flux import FluxPipeline

# Load Flux.1 Schnell in bfloat16
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)

# Quantize the CLIP text encoder to 4-bit and the much larger T5 text encoder to 2-bit
quantize(pipe.text_encoder, weights=qint4, exclude="proj_out")
freeze(pipe.text_encoder)
quantize(pipe.text_encoder_2, weights=qint2, exclude="proj_out")
freeze(pipe.text_encoder_2)

# Quantize the diffusion transformer to 4-bit
quantize(pipe.transformer, weights=qint4, exclude="proj_out")
freeze(pipe.transformer)

# Move all components to the GPU
pipe = pipe.to("cuda")

# Schnell is distilled for few-step sampling, hence num_inference_steps=4
for i in range(10):
    generator = torch.Generator(device="cpu").manual_seed(i)
    prompt = "Cyberpunk cityscape, small black crow, neon lights, dark alleys, skyscrapers, futuristic, vibrant colors, high contrast, highly detailed"
    image = pipe(prompt, height=512, width=768, guidance_scale=3.5, generator=generator, num_inference_steps=4).images[0]
    image.save(f"Schnell_{i}.png")
We can see that quantizing the larger text encoder to qint2 and the transformer to qint4 might be too aggressive, and it has a significant impact on quality for Flux.1 Schnell.
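One way to claw back some quality on a similar VRAM budget, though not necessarily one of the alternatives the author had in mind, is to keep every component at 4-bit and let Diffusers move idle components off the GPU with enable_model_cpu_offload(). This trades speed for memory; a minimal sketch under those assumptions:

import torch
from optimum.quanto import qint4, quantize, freeze
from diffusers.pipelines.flux.pipeline_flux import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)

# Keep the large text encoder and the transformer at 4-bit instead of dropping to 2-bit
quantize(pipe.text_encoder_2, weights=qint4, exclude="proj_out")
freeze(pipe.text_encoder_2)
quantize(pipe.transformer, weights=qint4, exclude="proj_out")
freeze(pipe.transformer)

# Components are moved to the GPU only while they are needed; do not call .to("cuda") here
pipe.enable_model_cpu_offload()

generator = torch.Generator(device="cpu").manual_seed(0)
image = pipe("Cyberpunk cityscape, small black crow, neon lights, highly detailed",
             height=512, width=768, guidance_scale=3.5,
             generator=generator, num_inference_steps=4).images[0]
image.save("Schnell_offload.png")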
Here are some alternatives for running Flux.1 Schnell:
If PixArt-Sigma is not sufficient for your needs and you don’t have enough VRAM to run Flux.1 at sufficient quality, you have two main options:
I had a little fun deploying PixArt Sigma on an older machine I have. Here is a brief summary of how I went about it:
First, the list of components:
Now, how do they all work together? Here’s a simple rundown:
Service URL: https://image-generation-app-340387183829.europe-west1.run.app
By quantizing the model components, we can significantly reduce VRAM usage while maintaining good image quality and improving generation speed. This method is particularly effective for models like PixArt Sigma. For Flux.1, while the results might be mixed, the principles of quantization remain applicable.
Running PixArt-Σ/Flux.1 Image Generation on Lower VRAM GPUs: A Short Tutorial in Python was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.