Multiple NICs as network switch

The opposite. A PC can't compete with an enterprise-grade switch. The switch has dedicated ASICs specifically for high-performance frame processing, while a PC has to do it all in software.


No. Just… No. Not remotely.


Again, no. Just stop. Your points 1 and 2 literally contradict one another.

  1. Most modern operating systems won't allow you to assign the same IP to multiple interfaces, whether they are connected or not.

  2. Multiple NICs on the same subnet would not all see the same packet, save for broadcast traffic. Packets are directed at layer 2 using MAC addresses, which are unique to each interface. If you'd like multiple interfaces to receive traffic for the same IP, this is also possible in most modern operating systems via the use of teaming. If we are talking switch-to-switch communication, this would be done using LACP (or a variation thereof, depending on the switch manufacturer and what they decided to call their implementation). "Storm surge error"? Did you just make that up? Lulz

  3. Multiple NICs in multiple networks is PERFECTLY fine, and on a PC it's called being "multi-homed". No, all outbound traffic will not use one interface. Lulz. Google "route table".


Sorry for all the fierce reactions :wink: I don't want to stir things up, but it seems to me that HA/HAOS is behaving a bit oddly. As exx said, 'most OSs won't allow' this, yet the setup below is allowed by HA, and it is not causing havoc. It just cannot be used to access the cameras.
I will resort to an external switch, despite the fact that I cannot believe these NICs cannot be joined into the one network.
Tnx again.

Lots of confusion in this thread. You need to create a virtual network interface called a bridge device, then associate all the physical NICs as slaves of that virtual bridge device.

The bridge device has the IP address, and packets are forwarded through the physical interfaces.

I don't use HAOS, but you should be able to create a temporary bridge with an IP; it just won't survive a reboot.
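
For anyone who wants to try it, here is a minimal sketch of that bridge setup in Python using the pyroute2 library (an assumption on my part; it has to be installed separately and run as root, and the interface names and address are just the examples used in this thread). HAOS would normally want this done through its own network configuration, and as noted above it won't persist across reboots:

```python
# Minimal sketch, assuming the pyroute2 library is available (pip install pyroute2).
# Interface names and the address are placeholders taken from this thread.
from pyroute2 import IPRoute

ipr = IPRoute()

# Create the bridge device and bring it up.
ipr.link("add", ifname="br0", kind="bridge")
br = ipr.link_lookup(ifname="br0")[0]
ipr.link("set", index=br, state="up")

# Enslave each physical NIC to the bridge.
for nic in ("enp2s0", "enp2s4", "enp2s5"):
    idx = ipr.link_lookup(ifname=nic)[0]
    ipr.link("set", index=idx, master=br, state="up")

# The bridge, not the individual NICs, carries the host's IP address.
ipr.addr("add", index=br, address="192.168.1.7", prefixlen=24)
```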

Ouch. That’s quite the mess there.

Why are you even attempting to use the same IP for every interface? The cameras are all on 192.168.1.0/24, yes? You are trying to directly communicate with each camera, right? So make enp2s0 192.168.1.7, enp2s4 192.168.1.8, enp2s5 192.168.1.9, and so on. That way each NIC and each camera has a unique IP address.

In addition, however, if one end of the cable is the NIC and the other end is the camera, you will need to use crossover cables unless your NICs support auto-MDI/MDIX. This may be part of the difficulty you are having.

Now that I’m thinking about this a bit more, however, I don’t think this is going to work anyway - not without some manual intervention with your route table.

Since the cameras are all in 192.168.1.0/24, the OS will use whatever NIC it feels like to get there since it has so many NICs in that network. That means you have a 1/x chance of actually communicating with the camera…

If you don't want to mess with the route table, you'd be better off moving each camera to its own network. One on 192.168.1.0/24, one on 192.168.2.0/24, etc.
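
To make that ambiguity concrete, here is a small Python sketch (interface names and addresses are just the examples from this thread): with the same /24 on every NIC, the destination matches every connected network and the OS is free to pick any of them, whereas with one subnet per camera exactly one NIC matches.

```python
import ipaddress

def matching_nics(dest, connected):
    """Return every NIC whose directly connected network contains dest."""
    ip = ipaddress.ip_address(dest)
    return [nic for nic, net in connected.items()
            if ip in ipaddress.ip_network(net)]

# All NICs on the same subnet: every one of them "reaches" the camera,
# so the OS can pick any of them for outbound traffic.
same_subnet = {"enp2s0": "192.168.1.0/24",
               "enp2s4": "192.168.1.0/24",
               "enp2s5": "192.168.1.0/24"}
print(matching_nics("192.168.1.64", same_subnet))    # ['enp2s0', 'enp2s4', 'enp2s5']

# One subnet per camera: the destination pins the traffic to exactly one NIC.
one_per_camera = {"enp2s0": "192.168.1.0/24",
                  "enp2s4": "192.168.2.0/24",
                  "enp2s5": "192.168.3.0/24"}
print(matching_nics("192.168.2.64", one_per_camera))  # ['enp2s4']
```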


So make enp2s0 192.168.1.7, enp2s4 192.168.1.8, enp2s5 192.168.1.9, and so on. That way each NIC and each camera has a unique IP address.

I don’t think this is going to work anyway

It will work; I have done this before. You just have to be more specific when defining your routes. The more specific route always wins.
Let's say I have two NICs, one with IP 192.168.1.1 and the other with 192.168.1.2, and my camera has IP 192.168.1.3.

Also, let's assume I have the following two routes in my routing table:
Destination 192.168.1.0, subnet mask 255.255.255.0 → interface 192.168.1.1
Destination 192.168.1.3, subnet mask 255.255.255.255 → interface 192.168.1.2

Any time I wanted to talk to 192.168.1.3 it would use 192.168.1.2, and then use 192.168.1.1 for everything else.
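
To illustrate that "most specific route wins" rule, here is a small Python sketch of longest-prefix matching over those two routes (a toy model of the lookup, not the actual kernel code):

```python
import ipaddress

# The two routes from the example above: a /24 out the NIC at 192.168.1.1
# and a /32 host route out the NIC at 192.168.1.2.
routes = [
    (ipaddress.ip_network("192.168.1.0/24"), "NIC 192.168.1.1"),
    (ipaddress.ip_network("192.168.1.3/32"), "NIC 192.168.1.2"),
]

def pick_route(dest: str) -> str:
    """Of all routes whose network contains dest, pick the longest prefix."""
    dest_ip = ipaddress.ip_address(dest)
    matches = [(net, nic) for net, nic in routes if dest_ip in net]
    net, nic = max(matches, key=lambda m: m[0].prefixlen)
    return nic

print(pick_route("192.168.1.3"))   # NIC 192.168.1.2 -- the /32 wins
print(pick_route("192.168.1.10"))  # NIC 192.168.1.1 -- only the /24 matches
```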

I made a helpful diagram

Routing and switching are simple integer operations; it is just a matter of comparing the IP values.
The hardware offloading you talk about is also present in PCs: CPUs contain cores made especially for math processing, and there are also external chips, like the chipset (the parts traditionally called the north and south bridge), which often handle some of the bus and NIC functions.
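
As a toy illustration of that "compare the integer values" point, deciding whether an IPv4 address falls inside a subnet boils down to a bitwise AND and a comparison (standard-library Python, purely for illustration):

```python
import socket
import struct

def to_int(ip: str) -> int:
    """Dotted-quad IPv4 address as a 32-bit integer."""
    return struct.unpack("!I", socket.inet_aton(ip))[0]

# "Is 192.168.1.3 inside 192.168.1.0/24?" is one AND and one compare.
addr = to_int("192.168.1.3")
network = to_int("192.168.1.0")
mask = to_int("255.255.255.0")
print((addr & mask) == network)  # True
```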

The software running on many enterprise-grade switches is just a Linux derivative, and that is also what runs on a hypervisor. I have yet to see anyone recommend that you replace the virtual switch in your hypervisor with a physical one to gain more bandwidth.
Test your internal hypervisor switch and you will see it hit terabit throughput values while running the hypervisor and the VMs at the same time.
Even high-end enterprise-grade switches can be encumbered by too many routing rules and a decent load, where a computer will not.

A computer can sometimes have a bottleneck on the bus though, especially if you try to use a gaming/workstation motherboard, due to too many unneeded devices occupying interrupts. A decent server motherboard will have several free bus interfaces, so several PCIe cards can be used without sharing resources internally.

No, they don't.
In 1 you would have multiple NICs with the same IP connected to the same network → duplicate IP error.

In 2 you would have multiple NICs with different IPs connected to the same network.
In a computer this would not mean much, but in a switch the received packets will be switched.

It is correct that switching happens on MAC addresses and only broadcasts are transmitted on all ports, but before the switch can forward to a specific port it needs an ARP who-has, which is a broadcast packet, and that will cause a rapidly growing broadcast storm, because each of the ports in this setup is connected to the same network. This rapidly growing broadcast storm is what my textbook (Cisco) called a storm surge.

We've been doing gigabit Ethernet for 20 years; even the cheapest Chinese switching silicon is going to perform at wire speed with low latency.

All with near zero configuration or maintenance overhead, which is probably more important to OP than wringing out every last mpps of switching performance.

I get that OP's intent was to streamline his networking gear. If it were me, I'd chuck one of the quad-port NICs and mount a de-cased 5-port GbE switch in its place.


Nice diagram, but the logic in the switch should actually be split up into a CPU, a bridge, and a switch.
You think the logic is all implemented in hardware?
It would then be impossible to make changes and correct errors after sale.
By far most of it is still software.

Yes, exactly.


A good way to solve the space issue.

Again, as others have already pointed out above, processing carried out directly by hardware is always faster than doing it in software. That's why Bitcoin mining rigs always beat PCs at the same task. The ASICs of the miners execute SHA-256 calculations in hardware, not in software. The same applies to a dedicated switch versus switching in PC software.

Now, if you are talking about transfers purely between vswitches, obviously that's going to be infinitely faster than physical switching. But here we have been debating physical switching, that is, packets going in and out through physical NICs. Inside a hypervisor there is no physical switching at all, so vswitches don't apply here. All the hypervisor has to do is transfer data between VMs executing in memory and orchestrate things so that the guest OSes think the data is coming through their network stack, without any actual switching occurring.


Sorry, but you are wrong.

Cisco's IOS and Ubiquiti's EdgeMax are just Linux systems with kernels compiled for this use case.
On many devices the setup is actually often only, or mostly, done in software.
Working with the offloading features on Cisco and Ubiquiti devices clearly shows that it is not all just hardware. Often hardware offloading is not even enabled, because hardware offloading is usually an optimized process of several steps and those steps cannot be interrupted, so there is a downside to hardware offloading that has to be taken into consideration.

It is actually often possible to install other operating systems on the devices, like OpenWRT or even a standard Debian, which would not be possible with all the logic being in hardware.
Often the hardware offloading requires special modules to be used, but sometimes those modules are available, and then it is actually possible to see how much or how little offloading actually changes.

Bitcoin mining is a floating-point operation.
CPUs are mainly focused on integer operations.
Bitcoin rigs are just computers with graphics cards, and graphics cards are specialized in floating-point operations.

You are comparing apples and oranges.

You are even more off the mark on this than you were when saying that a switch implemented on a generic PC is faster than a dedicated hardware-based device. But I won't expand on it because that's a different topic. Just know that that statement is factually, technically, and theoretically wrong.

I did not say that software is faster than hardware.
I said that the hardware in normal PCs is so fast that it outweighs the hardware offloading features in general network devices, whether consumer or enterprise grade.
Of course, if you buy a $2000 PC and a $2000 enterprise-grade switch, then the switch would most likely win, but $2000 would also get you an overkill switch for your use case for sure.

Within the context of the OP's proposition of using a PC with a bunch of NICs as a switch, you have been affirmatively arguing that:

And further:

Maybe that’s a misconception that you had/have. But in a nutshell, any decent switch will do, well, switching, faster than a PC.
