WTH is it with Raspberry stock

I recently got 2x HP ProDesk 400 G1 SFF (i3-4160 CPU @ 3.60GHz, 1TB HDD, 8GB RAM) at £25 each, just as HDMI/TV browser desktops, but the SFF is amazing value with a really good bronze-rated PSU. I would still like a tiny RPi Zero 2 W if I could get one though.

Anyone used Orange Pis? I ordered one from AliExpress with a case, fan, etc. for $50 and got it in about 10 days. I took the card out of my Pi 3, plugged it into the Orange Pi, and it booted right up…

https://www.aliexpress.us/item/3256804401031262.html?spm=a2g0o.order_list.0.0.1e731802cMxxuC&gatewayAdapt=glo2usa&_randl_shipto=US


I haven’t for quite a while, as I used to call Banana & Orange the strange fruit: they often shipped some really hacky BSP image that would boot, but there would often be some sort of problem.

I think they are much better than they used to be, but I have a preference for Radxa or Pine64.
The Rock 5 RK3588 is absolutely amazing: it is true entry-level desktop performance and smashes a Pi 4 by a very big margin. But again, it’s new, and it’s a long way off 100% mainline Linux.

OKdo x Radxa ROCK 4 Model SE 4GB Single Board Computer Rockchip RK3399-T ARM Cortex-A72 - OKdo. OKdo (aka RS) do a Rock 4 that I think is finally 100% mainline with the new Hantro additions. It’s a testament to how long smaller communities can take, even though the evolution of the Pi was also not breakneck; it also shows the economies of scale Raspberry have on price, when in stock.

I stopped looking for the RPi 4 and bought a Pi 400. The funny thing is, you get a keyboard, a case, and a much better heat solution, for less money. Best of all, they were easy to obtain. Very happy with it.

Yeah. You do lose some of the modularity of the standard Pi boards (e.g. most HATs require adapters, cases, and other solutions designed around the board layout), but I’ve been really happy with my RPi 400 for the all-in-one design that it is.

I have just been playing with a Raspberry alternative that completely blows the Pi 4 away in so many ways, and it is near perfect for a really strong HA server, especially if you were going to do centralised ASR/TTS voice control.

The Rock 5B with 2x M.2: running OpenAI’s Whisper, it was 5x faster than a Pi 4.
It’s an ML powerhouse, as the CPU is very strong, true entry-level desktop: 1.5 W idle, 3-5 W at 100% load.
Also, the Mali G610 is approximately as strong as the CPU for ML, and I finally got ArmNN working on an SBC with a Mali GPU. ArmNN is optimised for both Arm CPUs & GPUs.
The tutorial is here:

https://developer.arm.com/documentation/102603/2108/Device-specific-installation/Install-on-Odroid-N2-Plus

It’s a fairly easy install. The model for ASR is pretty bad, but it’s purely to test optimisation and load.
On a Rock 5B RK3588 the results are…

rock@rock-5b:~/workspace/armnn/python/pyarmnn/examples/speech_recognition$ python3 run_audio_file.py --audio_file_path samples/hp0.wav --model_file_path tflite_int8/wav2letter_int8.tflite --preferred_backends CpuAcc

Inference End: Avg CPU%=44.22205882352939
Runtime=0:00:05.506307
Realtime=x49.63404910042248

rock@rock-5b:~/workspace/armnn/python/pyarmnn/examples/speech_recognition$ python3 run_audio_file.py --audio_file_path samples/hp0.wav --model_file_path tflite_int8/wav2letter_int8.tflite --preferred_backends GpuAcc

Inference End: Avg CPU%=6.852573529411753
Runtime=0:00:06.292449
Realtime=x43.43305952896877
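The Realtime figure in those logs is just the clip duration divided by the inference wall time. A minimal sketch of that calculation (the numbers below are made up for illustration, not taken from the runs above):

```python
from datetime import timedelta

def realtime_factor(clip_seconds: float, runtime: timedelta) -> float:
    """Speed-up over real time: audio duration / inference wall time."""
    return clip_seconds / runtime.total_seconds()

# Hypothetical example: a 60-second clip transcribed in 1.2 seconds
print(realtime_factor(60.0, timedelta(seconds=1, microseconds=200000)))  # → 50.0
```

So x50 realtime means a minute of audio is transcribed in just over a second.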

As you will see, you just switch between CpuAcc or GpuAcc for --preferred_backends. With a Pi, having no Mali, it’s CpuAcc only. CpuAcc means it has been heavily NEON optimised; CpuRef means no NEON and a single core, and oh boy!
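The fallback behaviour that --preferred_backends describes (try each backend in order, drop to the next on unsupported layers) can be sketched in plain Python. The backend names are from the logs above; the layer names and support table are purely illustrative, not the ArmNN internals:

```python
# Illustrative sketch of backend-preference fallback (not the real ArmNN logic).
SUPPORTED = {
    "GpuAcc": {"conv2d", "relu"},             # pretend the GPU lacks softmax
    "CpuAcc": {"conv2d", "relu", "softmax"},  # NEON-optimised CPU supports all
    "CpuRef": {"conv2d", "relu", "softmax"},  # single-core reference fallback
}

def assign_backend(layer: str, preferred: list) -> str:
    """Return the first preferred backend that supports this layer."""
    for backend in preferred:
        if layer in SUPPORTED[backend]:
            return backend
    raise ValueError("no backend supports " + layer)

print(assign_backend("conv2d", ["GpuAcc", "CpuAcc", "CpuRef"]))   # → GpuAcc
print(assign_backend("softmax", ["GpuAcc", "CpuAcc", "CpuRef"]))  # → CpuAcc
```

This is why a model can still run with GpuAcc first in the list even when some layers only have CPU support: those subgraphs just fall back down the list.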

Dunno what it is with the software side of Arm, as their example has probably one of the most load-heavy MFCC audio preprocessing routines I have ever seen, and it makes evaluation near impossible: the majority of the load isn’t ArmNN but preprocessing audio.
I have hacked the code so it preprocesses all the audio first and then feeds that into the model, so we are only looking at model performance, not the MFCC code.

# Copyright © 2021 Arm Ltd and Contributors. All rights reserved.
# SPDX-License-Identifier: MIT

"""Automatic speech recognition with PyArmNN demo for processing audio clips to text."""

import sys
import os
import numpy as np
import psutil
import soundfile as sf
script_dir = os.path.dirname(__file__)
sys.path.insert(1, os.path.join(script_dir, '..', 'common'))

from argparse import ArgumentParser
from network_executor import ArmnnNetworkExecutor
from utils import prepare_input_data
from audio_capture import AudioCaptureParams, capture_audio
from audio_utils import decode_text, display_text
from wav2letter_mfcc import Wav2LetterMFCC, W2LAudioPreprocessor
from mfcc import MFCCParams
from datetime import datetime, timedelta

# Model Specific Labels
labels = {0: 'a', 1: 'b', 2: 'c', 3: 'd', 4: 'e', 5: 'f', 6: 'g', 7: 'h', 8: 'i', 9: 'j', 10: 'k', 11: 'l', 12: 'm',
          13: 'n',
          14: 'o', 15: 'p', 16: 'q', 17: 'r', 18: 's', 19: 't', 20: 'u', 21: 'v', 22: 'w', 23: 'x', 24: 'y',
          25: 'z',
          26: "'", 27: ' ', 28: '$'}


def time_float(result):
    seconds = int(result)
    microseconds = int((result * 1000000) % 1000000)
    output = timedelta(0, seconds, microseconds)
    return output

def parse_args():
    parser = ArgumentParser(description="ASR with PyArmNN")
    parser.add_argument(
        "--audio_file_path",
        required=True,
        type=str,
        help="Path to the audio file to perform ASR",
    )
    parser.add_argument(
        "--model_file_path",
        required=True,
        type=str,
        help="Path to ASR model to use",
    )
    parser.add_argument(
        "--preferred_backends",
        type=str,
        nargs="+",
        default=["GpuAcc", "CpuAcc", "CpuRef"],
        help="""List of backends in order of preference for optimizing
        subgraphs, falling back to the next backend in the list on unsupported
        layers. Defaults to [GpuAcc, CpuAcc, CpuRef]""",
    )
    return parser.parse_args()


def main(args, network, input_data):

    current_r_context = ""
    is_first_window = True
    avg_cpu = 0.0
    for input_chunk in input_data:
        # Run inference
        output_result = network.run([input_chunk])

        # Slice and Decode the text, and store the right context
        current_r_context, text = decode_text(is_first_window, labels, output_result)

        is_first_window = False

        display_text(text)
        runtime = datetime.now() - starttime
        print(" " + str(runtime))
        avg_cpu = avg_cpu + psutil.cpu_percent()

    print(current_r_context, flush=True)
    print("Inference End: Avg CPU%=" + str(avg_cpu / len(input_data)))
    return runtime

if __name__ == "__main__":
    args = parse_args()
    # Create the ArmNN inference runner
    network = ArmnnNetworkExecutor(args.model_file_path, args.preferred_backends)
    # Read command line args
    audio_file = args.audio_file_path
    sf_data, samplerate = sf.read(audio_file)
    sf_secs = time_float((len(sf_data) / samplerate))
    # Specify model specific audio data requirements
    audio_capture_params = AudioCaptureParams(dtype=np.float32, overlap=31712, min_samples=47712, sampling_freq=16000,
                                              mono=True)

    buffer = capture_audio(audio_file, audio_capture_params)
    # Extract features and create the preprocessor

    mfcc_params = MFCCParams(sampling_freq=16000, num_fbank_bins=128, mel_lo_freq=0, mel_hi_freq=8000,
                             num_mfcc_feats=13, frame_len=512, use_htk_method=False, n_fft=512)

    wmfcc = Wav2LetterMFCC(mfcc_params)
    preprocessor = W2LAudioPreprocessor(wmfcc, model_input_size=296, stride=160)   
    print("Processing Audio Frames...")
    input_data = []

    for audio_data in buffer:
        # Prepare the input tensors for each chunk up front
        input_data.append(prepare_input_data(audio_data, network.get_data_type(),
                                             network.get_input_quantization_scale(0),
                                             network.get_input_quantization_offset(0),
                                             preprocessor))

    # Run inference twice: the second pass shows warm performance
    starttime = datetime.now()
    runtime = main(args, network, input_data)
    print("Runtime=" + str(runtime))
    print("Realtime=x" + str(sf_secs / runtime))
    starttime = datetime.now()
    runtime = main(args, network, input_data)
    print("Runtime=" + str(runtime))
    print("Realtime=x" + str(sf_secs / runtime))

Both the model and the manner in which it works aren’t good, but this is a perf evaluation: CPU is x50 realtime / GPU x45.
There are no Mesa drivers for the Mali G610 and it’s using a Rockchip blob that I think is underperforming slightly, with a load of about 70%. It’s a new gen-3 Valhall, so fingers crossed that, like others, it gets added to Mesa; I think it could then be maybe 30% stronger than the CPU.
Still waiting for DTB updates and a driver for the NPU, but at 6 TOPS it is likely much stronger than CPU/GPU.

The SoC is very new and my board was at an early-adopter discount, as the images are very new, but so many vendors have adopted the RK3588/RK3588s that it is likely to become common with strong support.

I don’t understand. When I looked up the Rock 5B on Amazon, it shows a price about 10-20 times what a RPi 4B costs. Did I look up the wrong thing?

$129.00 Rock5 Model B – ALLNET China; OKdo are still to get stock in.

It’s $10 less than what you can get a Pi 4 for now, as no stock exists anywhere and hasn’t for almost a solid year, hence the thread. Raspberry Pi 4 Computer Model B 4GB RAM Board 1.5GHz 64-bit CPU WiFi Bluetooth 5704174031628 | eBay

It’s 5x the CPU power, has a real GPU (Videocore is so old it’s actually a DSP), a 6 TOPS NPU, PCIe 3.0 x4 M.2 & PCIe 2.0 M.2.
Even if you could get a Pi 4 at RRP, in terms of relative performance and function per $ the RK3588/RK3588s is more cost effective.
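To make the “function per $” claim concrete, here is a toy comparison. The 5x CPU figure is the Whisper result mentioned earlier; the Pi 4 price is a hypothetical scalper price, so treat this as a sketch, not a benchmark:

```python
# Hypothetical perf-per-dollar comparison (prices and ratios are illustrative).
rock5b = {"price_usd": 129.0, "relative_cpu": 5.0}  # ~5x Pi 4 CPU in the Whisper test
pi4    = {"price_usd": 139.0, "relative_cpu": 1.0}  # assumed scalper price, not RRP

def perf_per_dollar(board: dict) -> float:
    """CPU performance (relative to a Pi 4) per US dollar spent."""
    return board["relative_cpu"] / board["price_usd"]

ratio = perf_per_dollar(rock5b) / perf_per_dollar(pi4)
print(round(ratio, 1))  # → 5.4
```

Even at the official $35-55 Pi 4 prices (if you could find one), the multiplier only shrinks; it doesn’t flip.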

Radxa doesn’t sell on Amazon, just scalpers; authorised dealers are listed here:
https://wiki.radxa.com/Buy

They are still fulfilling early-adopter orders, and I guess still checking this revision, but it is definitely my top tip for a low-wattage Arm board if you want something new.
I think the RK3588 has the potential to be like the first Pis, as it is a substantial step up and has multiple options for ML and interfaces.

I was lucky, as I was in time for the early-adopter order and got an 8GB for £99, and I am truly, massively impressed (as you can tell :slight_smile: ). Arm boards have jumped to a whole new level, but the price increase is much less.

There are also a number of low-cost RISC-V boards coming on line that could compete with what I think is Raspberry’s best product, the Zero 2 W, and it would seem totally crazy for Raspberry to give so much preference to their commercial arm when so many alternatives are becoming available.

@stuartiannaylor, there are a few things that you are not considering.

  1. Availability - People are not necessarily looking to order a product from some unknown Chinese webshop, and generally do not want to wait months just to receive it, hence the topic…
  2. Price - A RPi4 should not sell for 129 USD, plus customs tax, plus VAT, plus your firstborn…
  3. Support - The Raspberry Pi Foundation not only produces the boards, but supports them with code for years as well. And that’s what people need too: a supported product, available for the masses.

We could bring up the Jetson Nano as well. A great board for AI, but how much does it cost, when is it going to be in stock, and is it easy to utilise for HA and for something else?

One of the suggested installations for HA is to use a RaspberryPi.

Supposed to be cheap and easily available, with a lot of options for cases, additional extensions, etc. (a lot of aftermarket products). But unfortunately not…

The Rock 5B can be the best of the best, but if you cannot find it in a local store for an affordable price with supported hardware and software, then it is like a Raspberry Pi sold for 100+++ USD…

I’ve moved on and discovered (like many others) that used thin clients make excellent HA boxes.

  1. Cheaper than the pi4 and the rock pi
  2. Easily available (plenty used corporate stock)
  3. Generally expandable (RAM and storage)
  4. Standard x86 configuration for HA
  5. Fully cased with PSU and often come with an SSD or flash. No need to 3d print or make your own. No dangly bits for USB SSDs.
  6. Fanless (this is a thin client not a NUC)

Already said: they are just fulfilling pre-orders, but they will be available at OKdo.

An RPi does sell for that, as Raspberry for a year now have totally turned their backs on retail outlets, with no stock.
The Jetson Nano is not that great at £160.80 (https://www.okdo.com/p/nvidia-jetson-nano-4gb-development-kit/) when in stock.
The cheaper 2GB is a real problem, as it is shared memory, and even with the 4GB its GPU is still only 478 GFLOPS.

I have had a Jetson, and to match the 8 cores of the Rock 5B, its Mali G610 GPU & 6 TOPS NPU, you are talking a Jetson costing a lot more than £160.80.

I have a big interest in voice AI and ‘smart control’, so the ML performance is really important to me. Jetsons are Nvidia, and they are Nvidia prices.

What @rockets said: Raspberry are killing themselves off as a valid option, as thin clients can be really cheap, or minis, or even the SFF that I posted.

I still have more compute power on that Rock 5B, with eMMC/NVMe, and yep, I can get $15 cases. It’s my new toy and I have been very impressed; it’s a monster of a new Arm board, hence why I am posting.

I also forgot to mention that with the right thin client power consumption is not that much higher than a pi4. Intel core i series NUCs, while providing more CPU power of course, cannot come close to matching this.

I think, as second-user buys, the older thin clients on older Atom chips can be really good value, price- and power-consumption-wise.
For newer NUCs, I think Intel launched an ‘Essential’ line that is really low power, but completely out of the second-user thin-client price range: about $150 for a barebone, I think. They speed-step really well, with really low idle and good load when you need it.

The much newer Arm cores in the Rock 5B are closer to a Pi 3 than a Pi 4 in power draw, with approx 1.5 W idle from an 8nm quad-core A76 + quad-core A55, yet running OpenAI’s Whisper ASR it is 5x faster for inference than a Pi 4.
Running the Mali, the NPU, and adding an NVMe will obviously add more wattage, but geeking out: Arm performance is going ever upwards, and for new budget tech I think it’s pretty awesome.
With ArmNN I can use either CpuAcc or GpuAcc with the Mali, and that is part of my interest.

I think with OpenVINO it is only the newer 12th-gen NUCs’ Iris Xe GPUs that can be used for ML, and they are out of my price range.
I haven’t done any dev work yet, but the RK3588 on the Rock 5B is looking perfect for the type of offline automatic speech recognition and text-to-speech quality I am expecting; on a Pi 4 there isn’t a chance.

The thin client is a great tip though: clean out the heat sinks and fan, apply new thermal grease and some tender loving care, and you stop some unnecessary e-waste whilst you are at it.

[edit]
ps if you do want a pi like board that is in stock maybe
