WTH is it with Raspberry stock

There are always alternatives. ODroids seem to be more easily available (I found several in stock). Or NUCs, or even a simple self-built x86/x64 setup with a cheap low-power mainboard and CPU.

I can’t find cheap NUCs at all. Where are you looking?

Actually, the shortage might already be over and turning into a glut, if you trust recent reports :newspaper:

A very recent article about the berries :point_down:

I guess Raspberry Pi Trading Ltd is just “playing” capitalism :tm: and preferring long-term revenue from long-term commitments (something only commercial customers can offer :moneybag:)

Guess they are also “playing” capitalism :tm: :joy:

PS.: Selling a Raspberry Pi 4-4GB for $250, write a PM if you are interested sold :stuck_out_tongue_winking_eye:

I see a few NUC i3 systems (typically no disk and no RAM) for less than $100 on eBay.

Not too sure about NUCs (you’d probably have to hunt for used ones); I was mostly looking at self-built x86 setups using something like a passively cooled Celeron on an H410 chipset. Easy to source, and you’re looking at something like €/$200-250 for a full setup (motherboard, CPU, a little RAM, SSD, PSU). Obviously more expensive than a bare Pi, but also a lot more powerful and a lot easier to get.

With some luck, Intel Compute Stick type devices can be bought used for quite a bargain (or for $500 new on Amazon). They are low-power, tiny :pinching_hand: all-in-one devices. The ones I use for Home Assistant are smaller than berries, powered over 5V micro USB, and have no fan (aka silent) :no_bell:

** Casually looks to the side of my desk **


:money_mouth_face:

It’s ironic: for years, any discussion of “the Pi needs X feature” that wasn’t aimed at the lowest common denominator met the same refrain on the boards (including from the mods), dismissing the request along the lines of “it’s not needed for the core educational mission/market.”

When it came down to it, they serviced those “non-core” needs first. I’m actually OK with that; the cash can ultimately fund the core mission in other ways. But just admit that the hardware sales and the customer base have moved well beyond it.

Which indeed made the Raspberry suddenly very interesting for many types of (commercial) applications and solutions. So in the end, because the community/makers screamed “we need more power, more RAM, faster Ethernet” and their wishes came true, companies started building products around it, causing the shortage of Raspberries for ordinary makers :man_shrugging:

It’s said some poor souls have already started buying products (they don’t need) just to scrape an RPi out of them :man_facepalming:

Nothing sudden about it. I doubt the bulk of Pi sales have been to the “core” audience since the Pi3, maybe back to the Pi2.

I just found the “core mission” excuse more than a little lame.

For us the biggest constraint on the Pi has been the lack of hardware encryption support (the BCM2711 omits the ARMv8 cryptography extensions). In a world of TLS everywhere it is (and was three years ago) a major limitation of the Pi 4 when you need to move a lot of data. Luckily, other SBCs don’t share that particular limitation.
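If you want to check whether a given board has the Arm crypto extensions, the kernel advertises them as feature flags in `/proc/cpuinfo`. A minimal sketch (the helper name and the sample flag strings are my own; real boards may list more flags):

```python
def has_arm_crypto(cpuinfo_text: str) -> bool:
    """Return True if the ARMv8 'aes' feature flag appears in /proc/cpuinfo text."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("features"):
            # Feature flags are space-separated after the colon
            flags = line.split(":", 1)[1].split()
            return "aes" in flags
    return False

# On a real board: has_arm_crypto(open("/proc/cpuinfo").read())
print(has_arm_crypto("Features\t: fp asimd evtstrm aes pmull sha1 sha2 crc32"))  # True
print(has_arm_crypto("Features\t: fp asimd evtstrm crc32 cpuid"))                # False
```

The second sample roughly matches what a Pi 4 reports: no `aes`, so OpenSSL and friends fall back to software AES.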

Just a few hundred boards here, though. In some ways it has been fortuitous, in that we had already largely moved away from the Pi because of it.

What exactly are you looking for? An actual Intel NUC or just something small? And of course, new or used?

PS.: Due to the difficulty of sourcing an RPi, I gave up on any project that requires one… It is really a shame how inflated prices have become.

Nothing special, just tinker material. I’ve exhausted HA at this point

:rofl:

Time to redo your router then…

Most of the time I look at cheap thin clients or desktop minis. The 6th-gen i5 ones are going for less than 200 Euro with 8 or 16GB of RAM.

Look at the HP Elite X2 1012 G1 or G2; they have a nice screen resolution for a small amount of money as second-hand units. Another user recently pointed out that the G1 with the screen off consumes 6 or 7W with HA running on it.
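At 6-7W the running cost is negligible; a quick back-of-the-envelope sketch (the €0.30/kWh tariff is my own assumption, adjust for your own):

```python
watts = 7                                  # reported draw with HA running
kwh_per_year = watts * 24 * 365 / 1000     # hours per year -> kWh
cost_eur = kwh_per_year * 0.30             # assumed tariff of EUR 0.30/kWh
print(round(kwh_per_year, 1), "kWh,", round(cost_eur, 2), "EUR/year")
# → 61.3 kWh, 18.4 EUR/year
```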

I recently got 2x HP ProDesk 400 G1 SFF (i3-4160 CPU @ 3.60GHz, 1TB HDD, 8GB RAM) for £25 each, just as HDMI/TV browser desktops, but the SFF is amazing value with a really good Bronze-rated PSU. I’d still like a tiny RPi Zero 2 W if I could get one, though.

Anyone used Orange Pis? I ordered one from AliExpress with a case, fan, etc. for $50 and got it in about 10 days. I took the card out of my Pi 3, plugged it into the Orange Pi, and it booted right up…

https://www.aliexpress.us/item/3256804401031262.html?spm=a2g0o.order_list.0.0.1e731802cMxxuC&gatewayAdapt=glo2usa&_randl_shipto=US

I haven’t for quite a while. I used to call Banana and Orange the strange fruit, as they often shipped some really hacky BSP image that would boot, but there would often be some sort of problem.

I think they are much better than they used to be, but I have a preference for Radxa or Pine64.
The Rock 5 RK3588 is absolutely amazing: it is true entry-level desktop performance and absolutely smashes a Pi 4 by a very big margin. But again, it’s new, and it’s a long way off 100% mainline Linux.

OKdo x Radxa ROCK 4 Model SE 4GB Single Board Computer Rockchip RK3399-T ARM Cortex-A72 - OKdo OKdo (aka RS) do a Rock 4 that I think is finally 100% mainline with the new Hantro additions. It’s a testament to how long smaller communities can take (even though the evolution of the Pi wasn’t exactly breakneck either), but it also shows the economies of scale Raspberry has on price, when in stock.

I stopped looking for the RPi 4 and bought a Pi 400. The funny thing is you get a keyboard, a case, and a much better thermal solution, for less money. Best of all, they were easy to obtain. Very happy with it.

Yeah. You do lose some of the modularity of the standard Pi boards (e.g. most HATs require adapters, cases, and other solutions designed around the board layout), but I’ve been really happy with my RPi 400 for the all-in-one design that it is.

I have just been playing with a Raspberry alternative that completely blows the Pi 4 away in so many ways, and it is relatively perfect for a really strong HA server, especially if you were going to do centralised ASR/TTS voice control.

The Rock 5B (2x M.2 slots) running OpenAI’s Whisper was 5x faster than a Pi 4.
It’s an ML powerhouse: the CPU is a very strong, true entry-level desktop part at 1.5W idle and 3-5W at 100% load.
The Mali G610 is approximately as strong as the CPU for ML, and I finally got ArmNN working on an SBC with a Mali GPU; ArmNN is optimised for both Arm CPUs and GPUs.
The tutorial is here

https://developer.arm.com/documentation/102603/2108/Device-specific-installation/Install-on-Odroid-N2-Plus

It’s a fairly easy install. The model used for ASR is pretty bad, but it’s purely there to test optimisation and load.
On a Rock 5B RK3588 the results are…

rock@rock-5b:~/workspace/armnn/python/pyarmnn/examples/speech_recognition$ python3 run_audio_file.py --audio_file_path samples/hp0.wav --model_file_path tflite_int8/wav2letter_int8.tflite --preferred_backends CpuAcc

Inference End: Avg CPU%=44.22205882352939
Runtime=0:00:05.506307
Realtime=x49.63404910042248

rock@rock-5b:~/workspace/armnn/python/pyarmnn/examples/speech_recognition$ python3 run_audio_file.py --audio_file_path samples/hp0.wav --model_file_path tflite_int8/wav2letter_int8.tflite --preferred_backends GpuAcc

Inference End: Avg CPU%=6.852573529411753
Runtime=0:00:06.292449
Realtime=x43.43305952896877

As you can see, you just switch between CpuAcc and GpuAcc for --preferred_backends; on a Pi, with no Mali, it’s CpuAcc only. CpuAcc means heavily NEON-optimised, CpuRef means no NEON on a single core, and oh boy!

Dunno what it is with the software side of Arm, as their example has probably the most load-heavy MFCC audio preprocessing I have ever seen, which makes evaluation near impossible: the majority of the load isn’t ArmNN but audio preprocessing.
I have hacked the code so it preprocesses all the audio first, then feeds that into the model, so we are only looking at model performance, not MFCC code.

# Copyright © 2021 Arm Ltd and Contributors. All rights reserved.
# SPDX-License-Identifier: MIT

"""Automatic speech recognition with PyArmNN demo for processing audio clips to text."""

import sys
import os
import numpy as np
import psutil
import soundfile as sf
script_dir = os.path.dirname(__file__)
sys.path.insert(1, os.path.join(script_dir, '..', 'common'))

from argparse import ArgumentParser
from network_executor import ArmnnNetworkExecutor
from utils import prepare_input_data
from audio_capture import AudioCaptureParams, capture_audio
from audio_utils import decode_text, display_text
from wav2letter_mfcc import Wav2LetterMFCC, W2LAudioPreprocessor
from mfcc import MFCCParams
from datetime import datetime, timedelta

# Model Specific Labels
labels = {0: 'a', 1: 'b', 2: 'c', 3: 'd', 4: 'e', 5: 'f', 6: 'g', 7: 'h', 8: 'i', 9: 'j', 10: 'k', 11: 'l', 12: 'm',
          13: 'n',
          14: 'o', 15: 'p', 16: 'q', 17: 'r', 18: 's', 19: 't', 20: 'u', 21: 'v', 22: 'w', 23: 'x', 24: 'y',
          25: 'z',
          26: "'", 27: ' ', 28: '$'}


def time_float(result):
    seconds = int(result)
    microseconds = int((result * 1000000) % 1000000)
    output = timedelta(0, seconds, microseconds)
    return output

def parse_args():
    parser = ArgumentParser(description="ASR with PyArmNN")
    parser.add_argument(
        "--audio_file_path",
        required=True,
        type=str,
        help="Path to the audio file to perform ASR",
    )
    parser.add_argument(
        "--model_file_path",
        required=True,
        type=str,
        help="Path to ASR model to use",
    )
    parser.add_argument(
        "--preferred_backends",
        type=str,
        nargs="+",
        default=["GpuAcc", "CpuAcc", "CpuRef"],
        help="""List of backends in order of preference for optimizing
        subgraphs, falling back to the next backend in the list on unsupported
        layers. Defaults to [GpuAcc, CpuAcc, CpuRef]""",
    )
    return parser.parse_args()


def main(args, network, input_data):

    current_r_context = ""
    is_first_window = True
    avg_cpu = 0.0
    for input_chunk in input_data:
        # Run inference
        output_result = network.run([input_chunk])

        # Slice and Decode the text, and store the right context
        current_r_context, text = decode_text(is_first_window, labels, output_result)

        is_first_window = False

        display_text(text)
        runtime = datetime.now() - starttime
        print(" " + str(runtime))
        avg_cpu = avg_cpu + psutil.cpu_percent()

    print(current_r_context, flush=True)
    print("Inference End: Avg CPU%=" + str(avg_cpu / len(input_data)))
    return runtime

if __name__ == "__main__":
    args = parse_args()
    # Create the ArmNN inference runner
    network = ArmnnNetworkExecutor(args.model_file_path, args.preferred_backends)
    # Read the audio clip and note its length for the realtime factor
    audio_file = args.audio_file_path
    sf_data, samplerate = sf.read(audio_file)
    sf_secs = time_float(len(sf_data) / samplerate)
    # Specify model-specific audio data requirements
    audio_capture_params = AudioCaptureParams(dtype=np.float32, overlap=31712, min_samples=47712, sampling_freq=16000,
                                              mono=True)

    buffer = capture_audio(audio_file, audio_capture_params)
    # Extract features and create the preprocessor
    mfcc_params = MFCCParams(sampling_freq=16000, num_fbank_bins=128, mel_lo_freq=0, mel_hi_freq=8000,
                             num_mfcc_feats=13, frame_len=512, use_htk_method=False, n_fft=512)

    wmfcc = Wav2LetterMFCC(mfcc_params)
    preprocessor = W2LAudioPreprocessor(wmfcc, model_input_size=296, stride=160)
    print("Processing Audio Frames...")

    # Preprocess all audio up front so the timed loop below measures inference only
    input_data = []
    for audio_data in buffer:
        # Prepare the input Tensors
        input_data.append(prepare_input_data(audio_data, network.get_data_type(),
                                             network.get_input_quantization_scale(0),
                                             network.get_input_quantization_offset(0), preprocessor))

    # Time the inference pass twice; main() reads starttime as a global
    starttime = datetime.now()
    runtime = main(args, network, input_data)
    print("Runtime=" + str(runtime))
    print("Realtime=x" + str(sf_secs / runtime))
    starttime = datetime.now()
    runtime = main(args, network, input_data)
    print("Runtime=" + str(runtime))
    print("Realtime=x" + str(sf_secs / runtime))

Both the model and the way it works aren’t great, but this is a perf evaluation: the CPU is ~x50 realtime and the GPU ~x45.
There are no Mesa drivers for the Mali G610 yet, and the Rockchip blob it’s using seems to underperform slightly, with load at about 70%. It’s a new gen-3 Valhall GPU, so fingers crossed it gets added to Mesa like the others; I think it could then be maybe 30% stronger than the CPU.
Still waiting for device-tree updates and a driver for the NPU, but at 6 TOPS it is likely much stronger than the CPU/GPU.
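For reference, the `Realtime=` figure the script prints is just clip duration divided by inference runtime. Plugging in the CpuAcc numbers from the log above (the clip length here is back-computed from those two logged values, so treat it as approximate):

```python
from datetime import timedelta

runtime = timedelta(seconds=5.506307)   # "Runtime=" from the CpuAcc log above
clip = timedelta(seconds=273.3)         # ~4.5 min of audio, inferred from the log
realtime_factor = clip / runtime        # dividing two timedeltas yields a float
print(round(realtime_factor, 1))        # → 49.6
```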

The SoC is very new and my board came with an early-adopter discount, as the images are very new, but so many vendors have adopted the RK3588/RK3588S that it’s likely going to be common, with strong support.

I don’t understand. When I looked up the Rock 5B on Amazon, it showed a price about 10-20 times what an RPi 4B costs. Did I look up the wrong thing?