
MikroTik Switching with LABS Master Switching on MikroTik – All Topics in the MTCSWE Certification exam are covered.

Maher Haddad

Copyright © 2021 Maher Haddad All rights reserved.

DEDICATION This book is dedicated to my wife Claudine and my 2 kids, Melanie and Emile, who supported me in this mission. I would also like to dedicate this book to my parents, who both passed away, for being the main reason behind my successful career in the Information Technology field.

Contents

ACKNOWLEDGMENTS
1 Introduction to Switching on MikroTik
  Switch chip in MikroTik Switches
  Features on the Switch Chip
2 Maximum Transmission Unit (MTU)
  MTU increase
  LAB
3 Virtual Local Area Network (VLAN)
  In which scenarios you may need VLANs
  What are the terminologies to know for the VLANs implementation
  802.1Q VLAN Overview
  Types of VLAN on MikroTik Switches
  LAB
  What is Q-in-Q?
  LAB
  Management VLAN
  LAB
  Dynamic Entries in Bridge VLAN
  Ingress Filtering
  Tag Stacking
  MAC Based VLAN
  LAB
4 Spanning Tree Protocol (STP)
  How spanning-tree solves the problem
  LAB
  Rapid Spanning-Tree Protocol (RSTP)
  LAB
  Multiple Spanning Tree Protocol (MSTP)
  LAB
5 Link Aggregation
  LAB
6 Port Isolation
  Isolated Switch Groups
  LAB
  Bridge Horizon
  LAB
7 Quality of Service (QoS)
  LAB
8 Layer 2 Security
  IGMP Snooping
  DHCP Snooping
  LAB
  Loop Protect
  Layer 2 Firewall
  ARP
  LAB
  Switch Host Table
  Port Security
  LAB
  802.1X Port Based Authentication
  Securing Switch Access
9 Power over Ethernet (PoE)
  PoE Priority Settings
  PoE Monitoring
10 Tools
  Switch Stats
  Bridge Filter Stats
  Bridge VLAN Table
  Bridge Ports
  Bridge Hosts Table
  IP ARP Table
  Interface Stats and Monitoring
  Port Mirror
  Sniffer (and Torch)
  Logs
  SNMP
11 SwOS
  LAB
  LAB: Port Isolation
  LAB: Bonding
  LAB: VLANs on SwOS
Final word
ABOUT THE AUTHOR

ACKNOWLEDGMENTS
Not too long ago, MikroTik started introducing its switches to the market. After a long record with MikroTik routers, the demand for MikroTik switches has increased a lot. For this reason, MikroTik created a complete course dedicated to switching: MikroTik Certified Switching Engineer (MTCSWE). This course was introduced to the market in 2020, so it is a very new course. As switching on MikroTik is a new topic, there are not many resources on the internet covering all the switching details. That is the reason I decided to build a course about MikroTik switching in detail. On one side, I cover all the switching topics needed in a production network; on the other side, I prepare you for the MTCSWE exam. To be able to follow the LABs of this book, you need 3 CRS3xx series switches, 2 end devices (such as PCs) and UTP cables. In most of the LABs I will be using 2 CRS switches, but for the Spanning Tree LABs we need 3 switches. I hope you will enjoy the course, and if you have any question or need any clarification, please feel free to contact me on my email: [email protected]

1 Introduction to Switching on MikroTik
MikroTik has 2 types of hardware in the market: routers and switches. I have seen a lot of confusion among companies that want to buy MikroTik routers or switches. Some buy routers and want to use them as switches; others buy switches and want to use them as routers. The main reason is always the price. Even though MikroTik routers and switches can both work on Layer 2 and Layer 3, I highly advise you to buy MikroTik routers if you want Layer 3 devices and MikroTik switches if you want Layer 2 devices. Below is a comparison between the MikroTik routers and switches:

As you can see, on the left side there is a router and on the right side there is a switch. Both can do Layer 2 and Layer 3, but look at the Layer 3 throughput of the switch: it is almost 70 times lower than that of the MikroTik CCR router. Of course, the price differs a lot, but the performance differs as well. If you look at the CCR router, it has a 72-core CPU while the CRS has only a dual-core CPU, which is another big difference when it comes to the traffic load each device can handle. MikroTik has different switch models available depending on where you want to deploy them and how much traffic they should carry. A good way to explain this is the picture below:

As you can see, in every big network there are 3 different layers: Access, Distribution, and Core. The Access layer is where you deploy switches that connect to end devices (we will speak about which MikroTik switch models to use here). The Distribution layer is where you deploy security and do bonding with the Core layer switches; that's why you require bigger switches than the ones installed in the Access layer. The Core layer is where all the traffic passes, so you require very high-speed links and high-performance switches. Let's see now which switches are best to deploy at each of the 3 levels:

As you can see, on each level you can buy a MikroTik Switch to handle the

work it is expected to do. This does not mean that, for example, core switches cannot be deployed at the distribution or access layer; that is possible of course, but ideally you leave the bigger switches at the upper levels of your network and the smaller ones at the lower levels. Also, notice the throughput of each of the switches: the core layer switch can pass at least 7 times more traffic than the access layer switch.

Switch chip in MikroTik Switches
In every MikroTik switch, there is a switch chip. The switch chip is a dedicated hardware chip inside the switch that forwards traffic coming in on one port and going out another port without sending it to the switch CPU. That is what we normally call hardware offloading: the switch-chip hardware passes the traffic instead of the CPU (forwarding through the CPU is usually called software forwarding). Each MikroTik switch model may have a different switch chip.

Let's see the diagram below for a MikroTik router, the CCR1072-1G-8S+:

As you see, this CCR Router (which is a layer3 router) doesn’t have a switch chip. That’s why it is not recommended to use this router as a layer2 device because all traffic has to be passed to the CPU, which makes the traffic slower and the CPU usage high.

Let's have a look at another RouterBoard, which is the CCR1072-1G-8S+:

As you can see, this CCR router has dual switch chips. This means that all ports connected to the same switch chip will be able to do hardware offloading, while if 1 port is on the first switch chip and another port is on the second switch chip and they need to communicate with each other, the traffic has to go through the CPU.

We have seen 2 examples of MikroTik routers; let's now look at the MikroTik switches, starting with the switch used at the core layer, which is the CRS326-24S+2Q+RM.

You see that it has 1 switch chip, which is a Marvell 98DX8332, and all 26 ports are connected to it. To mention, each switch chip has features and capabilities that may differ from another one (I will show you the features of the different switch chips in a moment).

Let's see the switch that is used at the distribution layer, which is the CRS328-4C-20S-4S+RM.

As you can see, it also has 1 switch chip, and all ports (SFP and combo Ethernet) are connected to it.

Finally, let’s see the one used for the Access layer that I am using in this course, which is CRS326-24G-2S+RM.

As you see, this switch also has only 1 switch chip, to which the 24 Ethernet ports and the 2 SFP+ ports are connected.

Features on the Switch Chip
As explained, each switch chip has different features that you can use when configuring the switch. Some of the features are:
Port switching
Port mirroring
VLAN
Bonding
Quality of Service
Port Isolation
Access control list
Host table
VLAN table
For this reason, when you want to buy a MikroTik switch, it is very important to know which switch chip model is inside and which features it supports. Based on that, you can check whether it has the features you want to use in your network before you buy it. Below you can see the features that are available on each switch chip model (picture taken from the MikroTik Wiki: https://wiki.mikrotik.com/wiki/Manual:Switch_Chip_Features)

To finish this chapter: MikroTik (as well as myself) recommends using CRS3xx series switches when you want to deploy switching in your network using MikroTik products. There are different MikroTik CRS models: Series 1xx, Series 2xx and Series 3xx. Also, there is the MikroTik CSS switch (Cloud Smart Switch), which runs the SwOS operating system (more about SwOS later in this course). Among all these devices, I highly recommend the CRS3xx series, and this course, as well as the MikroTik MTCSWE course, is based on the CRS3xx series. With the CRS3xx series you can profit from all the features that the switch chip provides while keeping hardware offloading. Below is a table that summarizes

this for you so you have a better idea:

As you can see, on the CRS3xx series you have all the features of the switch menu as well as the bridge available, and they use hardware offloading. That's why MikroTik wants you to use the CRS3xx series more than any other MikroTik switch model. That's all for this chapter, see you in the next one.

2 Maximum Transmission Unit (MTU)
The Maximum Transmission Unit, or what we normally call MTU, is the maximum size of the Layer 2 frame or of the Layer 3 packet that can be sent from one device to another. By default, MikroTik sets a Layer 2 MTU larger than 1500 bytes (for example 1592 or 1598 bytes, depending on the board) to allow you to send VLAN or MPLS headers with the frame without any problem. If we look at the router, we see the following on the interface:

As you can see, the Ether4 interface has 2 MTUs: MTU = 1500 bytes (the Layer 3 MTU) and L2 MTU = 1598 bytes (the Layer 2 MTU). You also see that on this RouterBoard the L2 MTU can go up to a maximum of 2028 bytes. Let's look deeper into the MTU and understand why the Layer 3 MTU is 1500 while the L2 MTU is bigger.

If we look at the picture above, we see that the data is 1480 bytes and the IP header is 20 bytes. That's why the Layer 3 MTU is normally 1500 bytes. However, on Layer 2 it is different: you can increase the Layer 2 MTU when you need to. In some cases, you may have software or applications that require a bigger Layer 2 MTU to work properly, so it is possible to do it. Or maybe you need to add VLANs or MPLS; then increasing the Layer 2 MTU is required. For example, if you want to send a frame with a VLAN tag, you require a Layer 2 MTU of: Data 1480 + IP 20 + VLAN 4 = 1504 bytes. That's the main reason why MikroTik gives a bigger Layer 2 MTU by default on each of its interfaces, so you can carry VLAN tags or MPLS without any issue. In case you are using switches from multiple vendors, it is very important to have the same L2 MTU set on all of them. One more important note: do not increase the Layer 3 MTU on the interface facing the internet, because the MTU on that interface should stay at 1500 bytes to work. If you want to increase the Layer 3 MTU for traffic inside your network only, that is possible, but not towards the internet.

MTU increase:
Now that we understand the MTU, let's see how the possibility of increasing the L2 MTU differs from one MikroTik router (or switch) to another. Below you can see an illustration demonstrating this:

You can clearly see that each model can have a bigger or smaller maximum L2 MTU. When you increase the L2 MTU beyond 1500 bytes, the resulting frames are called jumbo frames. Jumbo frames are frames with an MTU bigger than 1500 bytes. Normally we do not see this in an ISP environment, but we may see it in data centers or server farms, where a big MTU is required to allow faster communication on Layer 2 networks. Think of universities or hospitals which have a lot of servers and want the traffic to flow faster on Layer 2; that's where you can use jumbo frames. In this case, all they need to do is increase the Layer 2 MTU to the size they want and that the router/switch allows. Always remember that jumbo frames stay inside your network; once you need to go to the internet, you require the MTU to be 1500 bytes.

Let’s see now in a normal production network how the MTU works. Let’s start with the Layer 3 MTU:

You can see that the packet which came into the router had a Layer 3 MTU of 1500 bytes, and when it left the router it also had a Layer 3 MTU of 1500 bytes. So the MTU stays the same on Layer 3, and it is highly recommended that you do not change the Layer 3 MTU and keep it at 1500 bytes. Let's check the MTU on Layer 2 now:

As you can see, the frame came in with an L2 size of 1500 bytes. The router added a VLAN tag to the frame, so when it left the router its size had increased to 1504 bytes. In case you are using Q-in-Q (we will speak about it later in this course), the frame will carry more VLAN tags, which makes the L2 size go even higher. That's why MikroTik sets the default L2 MTU higher than 1500 bytes, so that you do not exceed it when using Q-in-Q, for example. It is very important, when you decide to increase the MTU, to know what you are doing; if it is done incorrectly you can run into fragmentation, which means the router will fragment the oversized packets/frames, causing delay in sending the data and causing the router to use more of its resources, such as its CPU.

LAB: In this LAB, I am going to increase the Layer 3 and Layer 2 MTU and check the

output. Below is our scenario:

As you can see, I have 2 routers connected to each other on their Ether1 interfaces. I am going to increase the MTU on L3 and L2, then see the result. Let's increase the Layer 3 MTU to 2000 and the Layer 2 MTU to 2028 on interface Ether1 of R1.
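If you prefer the terminal over Winbox, a minimal sketch of this step could look like the following (interface name as in the LAB; the maximum allowed L2 MTU depends on the board):

  # On R1: raise the Layer 3 and Layer 2 MTU on ether1
  /interface ethernet set ether1 mtu=2000 l2mtu=2028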

Let’s change the MTU’s on R2 same as we did on R1:

To be able to check the MTUs, we need to put IP addresses on the Eth1 interfaces of R1 and R2 so they can ping each other.

Let’s put the IP on the R2 interface Eth1:

Now we ping from R1 to R2 with a big size packet:

Now we go to R2 and capture the traffic using the Packet Sniffer so we can see what the MTU says:
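Since the screenshots are not reproduced in this text, here is a rough CLI equivalent of the steps above. The 10.12.12.0/30 addressing and the ping size are only assumptions for illustration; the book's screenshots may use different values:

  # On R1
  /ip address add address=10.12.12.1/30 interface=ether1
  # On R2
  /ip address add address=10.12.12.2/30 interface=ether1
  # From R1: ping R2 with a large packet size
  /ping 10.12.12.2 size=2000
  # On R2: capture the traffic arriving on ether1 with the packet sniffer
  /tool sniffer set filter-interface=ether1
  /tool sniffer start
  /tool sniffer packet print
  /tool sniffer stop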

While the ping from R1 to R2 is still running, I am capturing the packets coming to Interface Ether1 on R2. If we click on the packets, we can see the MTU size:

As you see, the MTU size is 2010 as we have set it. This is the end of this chapter; I hope you enjoyed it and see you in the upcoming chapter.

3 Virtual Local Area Network (VLAN)
Virtual Local Area Network, or what we normally call VLAN, is the main topic of the MikroTik MTCSWE course. VLANs are widely used in our networks, not only on MikroTik switches but also on switches from any other vendor such as Cisco, Ubiquiti, Juniper, and so on. The main question to ask is: why do we need to use VLANs in our switched network? The answer is easy. Imagine you do not use VLANs in your network; then your network is called a flat network, which means that if 1 device sends a broadcast, all other devices receive it, and enough broadcast traffic can bring your network to a halt. For this reason, we require a better design that segments our network using VLANs. When you create 2 VLANs, for example, and put 1 device in each VLAN, then even though they are physically connected to the same switch, they do not see each other. That means they will not be able to communicate with each other, and that's the beauty of VLANs.

In which scenarios may you need VLANs? Imagine you have a small company with 3 different departments; you can then create 1 VLAN for each department. In this case, employees working in the same department can communicate with each other, but not with employees working in other departments.

A company with 3 different departments without using VLANs.

A company with 3 different departments using VLANs.

Another example is to create a VLAN for IP phones, separate from the normal PC traffic which is on another VLAN. This way, all IP phone traffic carries a VLAN tag, which you can use to apply QoS to it and prioritize it over other traffic. Another example is for ISPs and WISPs: they use VLANs to separate their customers from each other by assigning a VLAN to each customer. That's a very common way that most ISPs in the world provide internet service to their customers.
What are the terminologies to know for the VLANs implementation? When speaking about VLANs, there are terminologies we should be aware of:
Ingress: when the frame enters the switch coming from an end device.
Egress: when the frame exits the switch going to an end device or another switch.
Tagged: the frame has a VLAN tag, or is tagged when forwarded.
Un-tagged: the VLAN tag is removed from the frame when forwarded.
Access port: a switch port connected to an end device; frames are un-tagged when leaving the switch towards the end device and tagged on ingress.
Trunk port: normally used between 2 connected switches to allow traffic of more than 1 VLAN to go from one switch to another.
Let me show you with an example:

As you can see, SW1 has a trunk port on Ether1 so it can receive the frames of different VLANs from SW2; that's why it is tagged. The same goes for Ether1 of SW2. You can see that Ether2 and Ether3 on SW2 are connected to end devices; that's why they are access ports. Once a frame comes in on Ether2 (that's the ingress), SW2 will give it a tag of VLAN20 and will send it from its trunk port Ether1 to SW1. If a frame is leaving out of Ether2 (that's the egress), the switch will remove the tag from it and forward it to the PC. The same applies to Ether3. Now that we understand how VLANs work and the terminologies we need to know, let's see where the tag appears in the frame.

802.1Q VLAN Overview
The tag is nothing more than something added to the Ethernet frame saying that it belongs to a VLAN. You have to think of it like a mark or a color from which the switch understands that this frame belongs to a particular VLAN.

On top, it shows the normal Ethernet frame. Below, you see the frame after the 802.1Q header, which carries the VLAN ID, has been added – that's the tagging. You can see that the frame remains the same; only a header has been inserted between the Source MAC address and the Type field. This header contains several pieces of information, one of them being the VLAN ID. Let's dig deeper and see what this 802.1Q header contains:

As you can see, the 802.1Q header (which is an open standard) consists of several fields:
Tag protocol identifier (TPID): a 16-bit field set to a value of 0x8100 that identifies the frame as an IEEE 802.1Q-tagged frame.
Tag control information (TCI), which contains:
Priority code point (PCP): a 3-bit field which refers to the IEEE 802.1p class of service.
Drop eligible indicator (DEI): a 1-bit field.
VLAN identifier (VID): a 12-bit field specifying the VLAN to which the frame belongs (2^12 = 4096).
VLAN IDs that should not be used in generic VLAN setups: 0, 1, 4095.

Let's explain this a bit. The TPID indicates whether you are using normal 802.1Q encapsulation or Q-in-Q (we will speak about Q-in-Q in this chapter). When the TPID value is 0x8100, you are using normal 802.1Q encapsulation. That's something you can see inside the bridge in the MikroTik switch configuration:

Then you have the Tag Control Information (TCI), which contains: the Priority code point (PCP), the Drop eligible indicator (DEI) and the VLAN identifier (VID). The PCP is used for QoS on VLANs, or what we normally call Class of Service (CoS). The PCP consists of 3 bits, which means the priority values we can use go from 0 to 7. 0 is the default (best effort), 1 is actually the lowest priority, and 7 is the highest. For example, if you want to prioritize ICMP traffic, you can give it a higher CoS value. This can be done using a mangle rule or the bridge filter rules (I recommend using a bridge filter rule), as in the sketch below.
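As a hedged example of the bridge filter approach, a rule along these lines would mark forwarded ICMP frames with a higher PCP value; the priority value 6 is only an illustrative choice, and this assumes bridge VLAN filtering is already in use on bridge1:

  # Give ICMP frames a higher 802.1p priority (PCP) when they cross the bridge
  /interface bridge filter add chain=forward mac-protocol=ip ip-protocol=icmp action=set-priority new-priority=6 comment="prioritize ICMP (illustrative)"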

The DEI consists of 1 bit and (formerly CFI) may be used separately or in conjunction with the PCP to indicate frames eligible to be dropped in the presence of congestion. The VID consists of 12 bits, and that's where the tagging happens. Being 12 bits, that means on each switch you can use up to 4096 VLAN IDs, because 2^12 = 4096. The VLAN IDs run from 0 and 1 up to 4095. It is highly recommended not to use VLAN 0, VLAN 1 and VLAN 4095. Excellent!!!! So now we have more information about VLANs; let's see the different types of VLANs we have on MikroTik switches.

Types of VLAN on MikroTik Switches
The 1st type of VLAN available on MikroTik switches is Port-Based VLAN. As its name says, the VLAN tag is added to the frame based on the switch port it arrives on, using what we call the "pvid". This is exactly what we have seen up to now: when a frame comes in on a port from an end device it is tagged, and when it leaves a port towards an end device it is un-tagged. Port-Based VLAN is one of the most common VLAN types used nowadays. How to configure Port-Based VLAN differs from one MikroTik switch model to another. In this course, I will focus only on configuring MikroTik CRS3xx switches. In case you want to see how to configure VLANs on other MikroTik models such as CRS1xx/2xx or CSS, I refer you to my online course in which I speak about this. The course can be accessed at the following URL for a cost of only $12 USD for lifetime access: https://mynetworktraining.com/p/vlanon-mikrotik-with-labs-routeros-swos
MikroTik provides us with different ways to configure VLANs. However, the best way is to use the bridge VLAN filter, where all traffic will be hardware-offloaded, which means that the traffic will not go to the CPU. However, if you want all your traffic to be hardware-offloaded, you should use only 1 bridge interface. In case you create 2 or more bridges, the interfaces that belong to the 2nd bridge will not be hardware-offloaded but software-forwarded, which means that the traffic will have to go to the CPU. Also, when using bridge VLAN filtering, we profit from all the other features that are available on the switch chip such as IGMP snooping, DHCP snooping, RSTP, MSTP, etc. So now that we understand what Port-Based VLAN is, let's do a LAB so we can learn how to configure it on a CRS3xx series switch.

LAB:

I have 2 CRS3xx series switches connected to each other on Ether1. On SW1 and SW2, Ether1 should be the trunk port, and Ether2 and Ether3 should be access ports, where Ether2 will be on VLAN20 and Ether3 on VLAN30. Then I am going to create 2 DHCP servers on R1, on Ether2 and Ether3, and we will see if Ether2 and Ether3 of R2 receive IPs from the DHCP servers created on R1. We will start by configuring the ports on SW1 and SW2 as trunk and access ports. On both SW1 and SW2, we will create a bridge interface and put Ether1, Ether2 and Ether3 inside it (be sure that hardware offload is checked on all added interfaces).

Adding ports to the bridge interface on SW1

Adding ports to the bridge interface on SW2

Let’s start configuring VLAN’s on SW1 then we copy the same configuration to SW2. We go to the bridge then to port Ether2 and we give it a pvid of 20

We do the same on port Ether3 but we give it a pvid of 30

Now we need to tell SW1 which port is the trunk and which ports are access ports, and on which VLAN. Remember: Ether1 should be the trunk, Ether2 should be an access port on VLAN 20, and Ether3 an access port on VLAN 30.

The last step is to enable VLAN filtering on the bridge. Be careful when you enable it: make sure all your configuration is correctly done, because you may lose connectivity to the switch. It is also recommended that you always keep a backup port on the switch which you can use to access it in case you lose connectivity because of the VLAN filtering.
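For reference, the whole SW1 side of this LAB can also be done from the terminal. This is only a sketch following the steps above, with the interface and VLAN numbers from the LAB:

  # Create the bridge and add the ports with hardware offloading
  /interface bridge add name=bridge1
  /interface bridge port add bridge=bridge1 interface=ether1 hw=yes
  /interface bridge port add bridge=bridge1 interface=ether2 hw=yes pvid=20
  /interface bridge port add bridge=bridge1 interface=ether3 hw=yes pvid=30
  # Define which ports are tagged (trunk) and untagged (access) per VLAN
  /interface bridge vlan add bridge=bridge1 tagged=ether1 untagged=ether2 vlan-ids=20
  /interface bridge vlan add bridge=bridge1 tagged=ether1 untagged=ether3 vlan-ids=30
  # Enable bridge VLAN filtering last, once everything is in place
  /interface bridge set bridge1 vlan-filtering=yes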

We are done on SW1. As the configuration is identical on SW1 and SW2, I will copy and paste the configuration from SW1 to SW2 (you can also redo the same steps on SW2 as was done on SW1). Now the configuration is done on both SW1 and SW2. Let's configure the DHCP server on R1 on interfaces Ether2 and Ether3, then see if R2 receives the IP addresses correctly from it. Let's put an IP address of 10.20.20.1/24 on Ether2 and 10.30.30.1/24 on Ether3.

Now let’s configure the DHCP server on Ether2

We will do the same on Ether3. The result will be:
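On the R1 side, the addressing and the two DHCP servers can be sketched from the CLI as follows. The book's screenshots configure this through Winbox; the pool names and ranges below are only assumptions that produce an equivalent result:

  # Addresses on R1
  /ip address add address=10.20.20.1/24 interface=ether2
  /ip address add address=10.30.30.1/24 interface=ether3
  # DHCP server for the 10.20.20.0/24 range on ether2
  /ip pool add name=pool20 ranges=10.20.20.10-10.20.20.100
  /ip dhcp-server add name=dhcp20 interface=ether2 address-pool=pool20 disabled=no
  /ip dhcp-server network add address=10.20.20.0/24 gateway=10.20.20.1
  # DHCP server for the 10.30.30.0/24 range on ether3
  /ip pool add name=pool30 ranges=10.30.30.10-10.30.30.100
  /ip dhcp-server add name=dhcp30 interface=ether3 address-pool=pool30 disabled=no
  /ip dhcp-server network add address=10.30.30.0/24 gateway=10.30.30.1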

Excellent!!!! Let’s see now if R2 will receive an IP address on Ether2 from the range of 10.20.20.x/24. To do this, we will need to enable the DHCP client on R2. Let’s do that:
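A one-line sketch of the DHCP client on R2:

  # On R2: request an address on ether2
  /ip dhcp-client add interface=ether2 disabled=no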

Guess what? R2 has received an IP address on its interface Ether2 from the range that it should receive from which is 10.20.20.x/24. As you can see in the picture below:

What about Ether3? Let’s enable the DHCP client on it and see if it will receive an IP from the DHCP server on R1 from the range of 10.30.30.x/24

And the result? Indeed, it has received an IP from the DHCP server on range 10.30.30.x/24.

Excellent!!! So this is how you can configure Port-Based VLAN on CRS3xx switches. As the Lab is still connected, I would like to explain and do a LAB about Q-in-Q using the LAB that we have currently used for the Port-Based VLAN. Then after that, we can speak about the MAC-based VLAN.

What is Q-in-Q?
Q-in-Q, which is also referred to as 802.1ad, means sending a VLAN tag inside another VLAN tag. But why do we need to do that? Well, there are several reasons. The 1st reason is that if we need more than 4096 VLANs in our network, this is not possible with plain 802.1Q; by using Q-in-Q you have the possibility of increasing the number of VLANs. The most common case for Q-in-Q is an ISP network, where a company gets a VLAN from its provider and wants to distribute internet service to different customers, putting each one of them in its own VLAN. In that case you require Q-in-Q. Or maybe you have a reseller for your ISP, and this reseller is assigned a VLAN and wants to put each of his customers on a different VLAN; then you can use Q-in-Q (the list can go on). When we used normal VLANs, we set the EtherType to 0x8100. For Q-in-Q, the EtherType should be 0x88A8 (the service tag), and the outer VLAN ID is also referred to as the S-VID. It is very important to remember that each time you use Q-in-Q, the frame size increases, so be sure that you have an L2 MTU that is big enough to support the Q-in-Q frame. The last thing I want to say here is why we call it Q-in-Q. The answer is very easy: we are doing 802.1Q encapsulation inside another 802.1Q encapsulation, which means a VLAN inside a VLAN. That's why it is called Q-in-Q, which refers to 802.1Q. Does that mean that if we intercept Q-in-Q traffic, we will see 2 VLAN tags? The answer is yes, we will. Enough speaking about Q-in-Q; let's apply it in a LAB now.

LAB:

The LAB is still the same as we left it in the previous one. What I need to do is create VLAN22 on Ether2 and VLAN33 on Ether3 of R1 and put the DHCP servers on them. This way, the VLANs on R1 are different from what is on Ether2 and Ether3 of SW1. We will see if VLAN22 and VLAN33 are able to reach the DHCP client on R2 when using Q-in-Q. Let's go to R1, create VLAN 22 and VLAN 33, then move the DHCP servers to them:

Excellent!!! We now have a VLAN created on each of Ether2 and Ether3 of R1. Remember, R1 is connected to SW1 on port Ether2, which is on VLAN20, and on port Ether3, which is on VLAN30. That means the VLANs on the switch are different from the ones on R1. Now that we have the VLANs created on R1, let's move the IP addresses from Ether2 to VLAN22 and from Ether3 to VLAN33, because I need to run the DHCP servers on those VLANs.

Last thing is to move the DHCP server from Ether2 to VLAN22 and from Ether3 to VLAN33:
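Put together, the R1 changes for this Q-in-Q LAB look roughly like this from the CLI. This is only a sketch; the DHCP server names dhcp20/dhcp30 come from the earlier sketch and are assumptions:

  # Create the inner VLANs on R1
  /interface vlan add name=VLAN22 interface=ether2 vlan-id=22
  /interface vlan add name=VLAN33 interface=ether3 vlan-id=33
  # Move the IP addresses from the physical ports to the VLAN interfaces
  /ip address set [find address="10.20.20.1/24"] interface=VLAN22
  /ip address set [find address="10.30.30.1/24"] interface=VLAN33
  # Move the DHCP servers to the VLAN interfaces
  /ip dhcp-server set dhcp20 interface=VLAN22
  /ip dhcp-server set dhcp30 interface=VLAN33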

We are done on R1. Now we need to tell SW1 and SW2 that we are going to use Q-in-Q. We do not need to do anything more than change the EtherType to 0x88A8. I will show you how to do that on SW1 and on SW2:
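On the switches, the only change is the bridge EtherType. A minimal sketch for SW1 and SW2:

  # On SW1 and SW2: use the 802.1ad service tag for VLAN filtering
  /interface bridge set bridge1 ether-type=0x88a8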

Very good. So now the 2 MikroTik switches know that Q-in-Q will be passing through them. Let's have a look now at R2: did it receive an IP address on its Ether3 interface?

You can see that it hasn't – hmmm. Why is that? Well, think about it: Q-in-Q sends a VLAN tag inside another VLAN tag, but on R2 we didn't create any VLANs, did we? So what we need to do is create VLAN33 on Ether3, then run the DHCP client on VLAN33 to see if it receives an IP from the range 10.30.30.x/24; after that, we try VLAN22. Let's do that.
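As a CLI sketch of what the next two Winbox steps do on R2:

  # Create VLAN 33 under ether3 and run a DHCP client on it
  /interface vlan add name=VLAN33 interface=ether3 vlan-id=33
  /ip dhcp-client add interface=VLAN33 disabled=no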

So the VLAN 33 has been created under Ether3, let’s now enable DHCP client on it.

As you can see, VLAN 33 has received an IP on R2. VLAN22 will behave the same way. That's all about Q-in-Q.

Management VLAN
Another important topic is the Management VLAN. You may have noticed that once you put a switch port on a VLAN, the PC connected to it is no longer able to log in to that switch (for example via Winbox). What if we have 80 MikroTik CRS3xx switches in our network – how will I be able to log in to them to configure them? The best way is to use a Management VLAN. The concept is very simple. You create a Management VLAN on each switch under the bridge interface, you give it an IP address, and then you tag that VLAN on the trunk ports between the switches. From the PC side, you need to use the same VLAN on your network card (some old network interface cards do not allow adding a VLAN ID), you put an IP address from the same range as the one on the Management VLAN, and then you will be able to gain access to the switch(es) again. I think you are lost now, right? No problem. Let's do a LAB, and with this LAB you will understand it better.

LAB:

As you can see, I have 2 MikroTik CRS3xx switches connected to each other on their Ether1 interfaces, which are trunk ports. Then I have R1 (which acts as an end device) connected from its interface Ether2 to Ether2 of SW2, which is on VLAN20. This configuration is already done, and I will not repeat it because I have already explained how to configure trunk and access ports. Now the point is that R1 no longer sees SW1 and SW2 and cannot configure them (remember that R1 is acting like a PC). To prove that, I will put my PC in place of R1, and you will see that SW1 is not shown, and if I try to connect to SW2 it won't work.

Imagine you have a lot of switches in your network and you have a problem with one of them; you need to connect to it to solve the problem, but you cannot. That is not ideal, do you agree? So, what options do we have here? Well, we need to create a Management VLAN on each of the switches, which we will use when we want to connect to them.

Let’s start creating the Management VLAN on SW1 and SW2 and give them an IP from the range of 192.168.1.x/24 Let’s start with SW1:

As you can see, I created a VLAN for management under the interface bridge1 and used VLAN ID 99. Now we need to give it an IP address:

Let’s do the same on SW2, creating a VLAN for management and give it an IP from the same range.

Excellent!!! The next step is to allow the VLAN99 to pass on the trunk port between SW1 and SW2 because my idea is that from my PC I need to be able to reach SW1. Let’s do that on SW1.

We needed to add Eth1 and Bridge1 to the tagged ports because we initially created VLAN 99 under the bridge1 interface, and the switch treats this interface like a normal port.

We have to do the same on SW2, but in addition I should make interface Ether2, which is connected to the end device, a tagged (trunk) port for VLAN 99 as well.
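Summarizing the management-VLAN steps on both switches as a CLI sketch. SW1 uses 192.168.1.1 as in the LAB; 192.168.1.2 for SW2 is an assumption, and on SW2 the end-device port ether2 is also tagged for VLAN 99:

  # On SW1
  /interface vlan add name=VLAN99 interface=bridge1 vlan-id=99
  /ip address add address=192.168.1.1/24 interface=VLAN99
  /interface bridge vlan add bridge=bridge1 tagged=bridge1,ether1 vlan-ids=99
  # On SW2
  /interface vlan add name=VLAN99 interface=bridge1 vlan-id=99
  /ip address add address=192.168.1.2/24 interface=VLAN99
  /interface bridge vlan add bridge=bridge1 tagged=bridge1,ether1,ether2 vlan-ids=99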

This way, SW1, and SW2 should be able to ping each other on the VLAN management IP. Let’s try from SW2 to ping the IP of the Management VLAN of SW1 which is 192.168.1.1

Excellent, they can ping each other. Now it is time to see if we can reach SW2 and SW1 from the end device on their management VLAN. For this, you need to create VLAN99 on your PC. Some NICs allow you to add a VLAN ID on the PC; if this applies to you, you just need to go to the advanced options of your NIC, add VLAN 99 on it, put an IP address on it from the range 192.168.1.x/24, then try to connect with Winbox to SW1 and SW2, and you will see it will be successful. Unfortunately, the NIC in my PC doesn't allow adding a VLAN ID; that's why I have put a router in its place, so I can create the VLAN ID on the router and assume that this router is just an end device. Then I will see from R1 whether it can see SW1

and SW2 on their Management IP addresses. Let’s first create a VLAN 99 on R1 under the Ether2 interface and give it an IP address of 192.168.1.3/24
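On R1 (standing in for the PC), a sketch of the same thing:

  # Management VLAN on the end-device side
  /interface vlan add name=VLAN99 interface=ether2 vlan-id=99
  /ip address add address=192.168.1.3/24 interface=VLAN99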

Let’s check from R1 the IP neighbors to see whether R1 can see the 2 switches on their management IP address:

Look!!!! R1 can see them on the VLAN99 interface. Excellent. Let's try to ping both IPs from R1 so we are sure we have reachability:

As you can see, the Router can reach both IP addresses, which means if you have a PC machine in place of the Router then you would be able to connect to the 2 Switches using winbox on their IP addresses without any issue.

That’s the end of the management VLAN on the CRS3xx series Switches, hope you liked it.

Dynamic Entries in Bridge VLAN
We still have the same LAB scenario as in the management VLAN section. If we look at SW1 and SW2, we see in the bridge VLAN table that a dynamic entry has been created with the interfaces bridge1 and Ether1 as untagged on VLAN 1.

Why is that? Because when we added the interfaces to the bridge, both bridge1 and Ether1 had the default VLAN ID of 1. What does this mean exactly? It means that if I connect my PC to interface Ether1 on SW1 or SW2, I will be able to fully access the switch from Winbox. The reason is that the Ether1 and bridge1 interfaces are both on VLAN 1, which is the native VLAN. Did you get my point? Maybe it's a good exercise to connect my PC to Ether1 of SW1 and then of SW2. Let's start by connecting my PC to Ether1 of SW1.

As you can see, SW1 shows up directly in Winbox because I am connecting to Ether1, which has VLAN ID 1, the same as the bridge1 interface; so I can see the switch in Winbox and I am able to log in to it and configure it. Let me move my PC cable to Ether1 of SW2 and see if I get the same result.

As you can see, I can also see SW2 when connecting to Ether1. Wow... that's not something we would like to have, right? I know that some of you may be saying now that Ether1 is a trunk port and normally nobody can physically reach it. Well, that's somewhat true, but it is still a point of weakness to leave it this way. So, what is the solution here? Well, the solution is easy: change the PVID on Ether1 or on bridge1 to another value, so that the Ether1 and bridge1 interfaces do not have the same PVID. Why do we do that? Because once the 2 interfaces aren't on the same PVID, a PC connected to Ether1 no longer has full access to the switch. Got the idea? Let me show you by changing the PVID on the bridge1 interface of SW2 to 122 and see if my PC will still be able to reach the switch when connecting to Ether1.
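The change itself is a single setting; as a sketch:

  # On SW2: move the bridge interface itself off the default VLAN 1
  /interface bridge set bridge1 pvid=122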

If we look at the VLAN table on SW2, we will see that bridge1 is on VLAN 122 and Ether1 is on VLAN1, which means they are on 2 different VLANs, and this has been dynamically shown in the entries.

I will put the cable again from my PC to SW2 interface Ether1 and see if I am still able to reach it from Winbox

As you can see, nothing shows up, and I am not able to connect to SW2. Even if you try to put the MAC address of SW2 into Winbox and connect to it, you won't be able to do it.

Ingress Filtering
Another topic that I want to show you for VLANs is ingress filtering. You can see this setting in 2 places. The 1st place is inside the port, which is inside the bridge, as follows:

Also, you see it on the bridge interface itself, on the VLAN tab, as you can see in the picture below:

Let's first see its function on the physical port (the 1st picture). The concept is very easy: it is a security measure so that if an access port receives a frame with a tag, we do not accept it at the port. In fact, an end device should not send us a tagged frame, right? You can also use it on a trunk port, in which case a frame arriving at the trunk port with a VLAN tag different from the VLANs configured on that port is dropped. This is the whole function of ingress filtering at the port level. How to apply it? Have a look here:

So now any traffic coming in on the Ether2 interface from an end device with a VLAN tag on it will be rejected. Let's now look at the bridge level. At that level, once you enable Ingress Filtering, any frame received with a VLAN that doesn't exist in the VLAN table is dropped before the bridge sends it out on the egress. Where is the VLAN table? Here it is:

Now, how can we enable ingress filtering on the bridge? It is straightforward:
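Roughly, the two settings look like this from the CLI (ether2 as the access port from the LAB; a sketch, not the book's exact screenshots):

  # Port level: accept only untagged frames from the end device on ether2
  /interface bridge port set [find interface=ether2] ingress-filtering=yes frame-types=admit-only-untagged-and-priority-tagged
  # Bridge level: drop frames whose VLAN is not in the VLAN table
  /interface bridge set bridge1 ingress-filtering=yes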

Note that in both cases ingress filtering does not work if you don't have VLAN filtering enabled.

Tag Stacking
Another option that you see at the port level is Tag Stacking. This option can be enabled or disabled. If you enable it, all traffic entering the port is treated as untagged, and the VLAN ID configured on that port (the pvid) is added on top of the frame even if it already carries a VLAN tag coming in. How to set it? It is very easy; you just need to check Tag Stacking inside the port itself, as you can see here:
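As a CLI sketch:

  # Stack the port's PVID on top of whatever tag arrives on ether2
  /interface bridge port set [find interface=ether2] tag-stacking=yes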

MAC Based VLAN
All that we have seen up to now has been port-based VLAN, which is a very common approach. But on the CRS3xx series you can also use MAC-based VLAN. What exactly is MAC-based VLAN? The concept is very easy: each network device has a unique MAC address on its network interface card. We can configure the switch so that each MAC address belongs to a particular VLAN. That means that if my PC is on VLAN 10, then wherever I connect my PC in the network I will remain on VLAN 10. That makes things much easier for many network administrators. Enough theory; let's see how we can apply this in a LAB. I have already wiped all configuration from my switches.

LAB:

I have the following LAB: R1 is connected to SW1 on interface Ether1, and on the other hand I have connected my PC to interface Ether2 of SW1. I am going to create a VLAN20 interface on R1 and enable a DHCP server on it; then on SW1 I will make Ether1 a trunk port and assign the MAC address of the PC to VLAN 20. Then I need to see if the PC gets an IP address from the DHCP server.

Let’s start the work on R1. I will create the VLAN under Ether1 and assign it an IP address:

Now I will create the DHCP server under the interface VLAN20 on R1.
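A terminal sketch of the R1 side; the 10.10.20.0/24 addressing and the pool range are assumptions, since the book configures this through Winbox:

  # VLAN 20 on top of ether1, with an address and a DHCP server on it
  /interface vlan add name=VLAN20 interface=ether1 vlan-id=20
  /ip address add address=10.10.20.1/24 interface=VLAN20
  /ip pool add name=pool-vlan20 ranges=10.10.20.10-10.10.20.100
  /ip dhcp-server add name=dhcp-vlan20 interface=VLAN20 address-pool=pool-vlan20 disabled=no
  /ip dhcp-server network add address=10.10.20.0/24 gateway=10.10.20.1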

Now R1 is ready. We have created the VLAN 20, we assigned it an IP address and a DHCP server.

Let’s go to SW1 and configure it now for the MAC-Based VLAN. First, we need to create the bridge interface, then add Ether1 and Ether2 as well as Ether3 to the bridge (I need Ether3 for testing).

Be sure that Hardware Offload is checked when adding the ports to the bridge. Now you need to add the VLANs, of which we only need one for this LAB (VLAN 20), making Ether1 a trunk (tagged) port and Ether2 and Ether3 access (untagged) ports. I will also create VLAN 30 and make it tagged on Ether1 and untagged on Ether2 and Ether3. Then I will create a rule on the switch saying that any traffic coming from the MAC address of my PC goes to VLAN 20 and not to VLAN 30. This way, we should receive an IP address from the DHCP server that we have set up on VLAN 20 of R1. Let's do that.
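The SW1 bridge and VLAN table for this LAB could be sketched as follows:

  # Bridge with hardware offloading on the three ports
  /interface bridge add name=bridge1
  /interface bridge port add bridge=bridge1 interface=ether1 hw=yes
  /interface bridge port add bridge=bridge1 interface=ether2 hw=yes
  /interface bridge port add bridge=bridge1 interface=ether3 hw=yes
  # VLAN 20 and VLAN 30: tagged on the trunk, untagged on the access ports
  /interface bridge vlan add bridge=bridge1 tagged=ether1 untagged=ether2,ether3 vlan-ids=20
  /interface bridge vlan add bridge=bridge1 tagged=ether1 untagged=ether2,ether3 vlan-ids=30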

Excellent. So now on Ether2 and Ether3 of SW1 there are VLAN 20 and VLAN 30. How does the switch know that when my PC is connected to Ether2 or Ether3, it should be put on VLAN 20 and not VLAN 30, so it can receive an IP from the DHCP server? That's what is done with the MAC address of my PC. Let's first get the MAC address of the interface card of my PC.

Okay, this is the MAC address of the NIC in my PC. Now we need to go to SW1, and under the Switch menu we should create 2 rules saying that if my PC is connected to Ether2 or Ether3, its traffic should go to VLAN 20. Let me show you where the Switch menu is in Winbox.

Let’s create the rules from the Switch Tab

So, in this rule, I am saying that if the switch sees traffic coming in on the Ether2 or Ether3 interface from the MAC address of the PC, it should be put on VLAN 20. That's it. Now, do not forget to enable VLAN filtering on the bridge interface so the VLAN processing will work on the switch.
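Assuming the PC's MAC address is D4:3D:7E:AA:BB:CC (a placeholder – use the real one from your NIC), the switch rules and the final step could look like this:

  # Classify traffic from this source MAC into VLAN 20 on either access port
  /interface ethernet switch rule add switch=switch1 ports=ether2 src-mac-address=D4:3D:7E:AA:BB:CC/FF:FF:FF:FF:FF:FF new-vlan-id=20
  /interface ethernet switch rule add switch=switch1 ports=ether3 src-mac-address=D4:3D:7E:AA:BB:CC/FF:FF:FF:FF:FF:FF new-vlan-id=20
  # Do not forget to turn on VLAN filtering on the bridge
  /interface bridge set bridge1 vlan-filtering=yes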

I will connect my PC to the interface Ether2 and check if it will receive an IP from the DHCP server on R1.

Indeed, it has received an IP from the DHCP Server, so that means this PC is now on VLAN 20. Excellent!!!! Let me try to put the cable to Ether3 to see if it will also receive an IP address from the DHCP server.

As you can see, I have released the IP and renewed it after connecting my PC to Ether3, and indeed it is receiving an IP address from the R1 DHCP server which is on VLAN 20. That's amazing. This is how you can use MAC-based VLAN on MikroTik CRS3xx switches.

4 Spanning Tree Protocol (STP)
In this chapter, I am going to speak about the Spanning Tree Protocol, which we normally refer to as STP. The STP protocol has 1 function on our Layer 2 networks: to avoid loops on Layer 2. What exactly is a loop and how does it happen? Let me show you with an example:

In this scenario, we have 2 switches connected to each other with 1 cable. We have a single point of failure, because if the cable or any of the connected ports is damaged, our network goes down. To get rid of the single point of failure, we add another cable as follows:

When adding the 2nd cable, we have redundancy. That's a good thing. But the bad thing is that with redundancy we end up having loops. Let me show you how a loop can happen in this scenario.
1- PC1 sends an ARP request to get the MAC address of PC2, so PC1 sends a broadcast frame.
2- SW1 receives the broadcast and sends it out of all its ports except the port the ARP request came in on. So it sends it out of Ether1 and Ether2.
3- SW2 receives the broadcast frame on Ether1 and on Ether2.
4- In its turn, SW2 sends the broadcast out of all ports except the port the frame came in on. That means it sends the broadcast that came in on Ether1 out of Ether2 and to PC2, and the frame that came in on Ether2 out of Ether1 and to PC2.
5- SW1 receives the broadcast frames again on Ether1 and Ether2 and does the same as SW2 did.
This keeps going indefinitely. Nothing stops the loop, and when it happens, your network is down. I have received a lot of questions from my students saying that TTL (Time to Live) is the time the Ethernet frame has before it dies, which would solve the problem. My answer is always that TTL is for Layer 3 packets, not for Layer 2 frames. On Layer 2 there is no TTL, which means the loop never ends. In this case, the only solutions are either to remove one of the cables between the 2 switches – then the loop is gone, but we no longer have redundancy – or to run STP while keeping both cables connected.

How spanning-tree solves the problem
With STP, we will have a loop-free topology on Layer 2. To understand STP better, I will go through examples. Let's say we have the following scenario:

As you can see, we have 3 switches connected to each other, and we have redundancy here. Why do we have redundancy? If you take SW2, it can reach SW3 directly from Ether3, but it can also reach it via SW1. That's what redundancy is. Again, remember: when we have redundancy, we have loops. So how do we avoid them? We need to run STP. As you can see in the picture, each switch has a priority and a MAC address. By default, all priorities on the CRS3xx switches are the same. MikroTik shows the priority in hexadecimal, which is 0x8000 (equal to 32768 in decimal). If STP is enabled, the switches send each other a special frame called a BPDU (Bridge Protocol Data Unit). This special frame carries the 2 pieces of information that STP requires: the MAC address and the priority. The MAC address and the priority together make what we call the Bridge ID.

Let’s see how the BPDU’s will be sent:

Why are the BPDUs sent from one switch to another? Because the switches need the Bridge IDs to elect the switch which is going to be the root bridge. The switch which has the lowest Bridge ID (priority + MAC) ends up being the root bridge. Look at the picture above: which switch has the lowest Bridge ID? You see that all have the same priority, so we look at the MAC address. We see that SW1 has the lowest MAC address of all, so SW1 ends up being the root bridge.

As you can see, SW1 has become the root bridge, and all the ports of the root bridge immediately move to the designated state, which means they can forward traffic. SW2 and SW3 became non-root bridges. Those non-root bridges check what the shortest path is for them to reach the root bridge.

On SW2, the shortest path to reach SW1 is via the Ether1 interface, and for SW3 the shortest path to reach SW1 is via Ether2. Those ports are therefore called root ports. Of course, you can change this by changing the cost of an interface, which is something we will see later. But let's assume that we have the default cost, which is 10, on all interfaces; that means our scenario becomes like this:

On a root port, traffic is also forwarded. So that means that until now we haven't solved the loop issue, because all ports are still forwarding, whether they are designated ports or root ports.

We now have to look at the last segment, which is between SW2 and SW3. One of the ports should go to a blocking state (also called alternate), which means it doesn't pass traffic. Again, both switches (SW2 and SW3) compare their Bridge IDs; the one with the lower Bridge ID gets the designated port and the other goes to alternate. We see that SW2 has the lowest Bridge ID, so that's how the scenario ends up:

Now, why did Ether3 of SW2 become a designated port and not a root port, for example? Well, because you require 1 designated port on each segment; you can see that each segment has 1 designated port. Now we have a loop-free topology, because Ether3 of SW3 is not allowing traffic to pass. But what if any of the operational links goes down? In this case, the alternate port moves to a forwarding state, and that's exactly the main job of a redundant network. Let's say that the network is working and all of a sudden port Ether1 of SW1 goes down; then the alternate port has to move to a forwarding state. How long does this failover take? Let me show you:

As you can see, it may take up to 50 seconds for the failover to happen. I know this is a long time to wait for our network to become operational again. Remember, STP is a very old protocol, created back in the 1980s, and at that time 50 seconds was not a big issue for a failover. Nowadays we can't wait that long, and that's why we have Rapid STP, an enhanced STP protocol which works much faster (we will learn about RSTP later in this course). Back to the scenario: the port in the blocking state waits up to 20 seconds before moving to the listening state. In the listening state, the port starts receiving and sending BPDUs but does not learn MAC addresses nor transmit data; the listening state takes up to 15 seconds. In the learning state, the port still sends and receives BPDUs and now learns MAC addresses, but it still does not transmit data; this state takes up to 15 seconds. Once in the forwarding state, the port starts transmitting data. This is the whole process of the transition of a port from alternate to forwarding. Enough theory; let's do the first LAB about STP.

LAB:

As you can see, this is my scenario. I have a redundant switch network and I want to use classic STP to have a loop-free network. I have already put the interfaces inside the bridge on all switches. Remember, you should be on a RouterOS version above 6.41 so that switch-chip hardware offloading works with the bridge. Also, when adding the interfaces to the bridge, be sure that you check hardware offload. By default, MikroTik switches use RSTP, so we need to change that to STP. You go to each switch, and in the bridge you go to the STP settings and enable STP.
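From the terminal, this is a single setting per switch:

  # On SW1, SW2 and SW3: switch the bridge to classic STP
  /interface bridge set bridge1 protocol-mode=stp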

If you look carefully, the priority by default is 8000 in hexadecimal. Now STP is enabled on all 3 switches. Excellent. Let's see which switch has been elected as the root bridge and check its port states. If we compare the Bridge IDs of the 3 switches, we see that SW1 has the lowest Bridge ID, so all its ports should be designated ports. Let's check on SW1 itself.
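One way to verify this from the CLI is the bridge monitor, which reports whether this bridge is the root and which port leads towards the root (a sketch; the exact output fields may vary between RouterOS versions):

  # On SW1: check the spanning tree status of the bridge
  /interface bridge monitor bridge1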

Let's see if SW1's ports are designated ones:

Excellent!!!! Both Ether1 and Ether2 are designated ports. Also, look at the letter "H" next to each interface, which indicates that the interfaces are hardware-offloaded. Now that we know which switch is the root bridge, the interfaces of the non-root bridges (SW2 and SW3) which face SW1 should be root ports, because that is their shortest path to reach the root bridge. Let's check SW2 first and see if Ether1 is a root port.

Indeed, Ether1 of SW2 is a Root port. What about Ether2 of SW3, is it also a root port? Let’s have a look:

Indeed, it is also a root port. Very good!!!! Now we still have the last segment, between SW2 and SW3. One of the ports should be a designated port and the other should be an alternate port. Remember, the 2 switches compare their Bridge IDs, and the one with the lowest Bridge ID gets the designated port. As both priorities are the same, we have to compare the MAC addresses: SW2: 08:55:31:71:A9:E6, SW3: 74:4D:28:BC:DE:2B. We see that SW2 has a lower MAC address than SW3, so port Ether3 of SW2 should be the designated port while port Ether3 of SW3 should be alternate. Let's check that; we start with SW2:

Indeed, Ether3 of SW2 is a designated port. By the way, Ether23 is the port where PC2 is connected to the switch. Let's now check on SW3 what the port state of Ether3 is:

As you can see, it is alternate. Ether5 is the port where PC1 is connected. Excellent!!! So the logic that we followed in the theory works 100% here. Now I will run a continuous ping from PC1 to PC2. The ping will go from PC1 to SW3, then to SW1, then to SW2, and will reach PC2 (and the same on the way back). While the ping is running, I am going to disconnect port Ether2 of SW3 and see how long it takes for the STP failover to happen. Let's start with the continuous ping from PC1 to PC2.

As you can see, the ping is working. Now I will disable the interface Ether2 on SW3 and see what will happen to the ping.

You see, once I disabled Ether2 on SW3, I had 6 request timeouts, and then the link started working again because STP did the failover for me. Of course, 6 request timeouts is a long time to wait. Is there a better way to make this go faster? The answer is yes: by using the Rapid Spanning Tree Protocol (RSTP), which I am going to show you right away.

Rapid Spanning-Tree Protocol (RSTP)
When using RSTP, the transition of a port from alternate to forwarding goes much faster. That's why this protocol is called Rapid STP. There are many differences between classic STP and RSTP. However, if in the same network some MikroTik switches are using the STP protocol and others are using RSTP, then STP will be the one used on all the switches. If we compare STP and RSTP, you will see many differences. The 1st is as follows:

You see that the blocking and listening states of STP become the discarding state in RSTP. Another difference is that there is no longer a timer for the port to move from the blocking state to forwarding as we have seen with STP; there is another mechanism, a negotiation process (which is out of the scope of this course), that makes the transition go much faster. If you would like to learn about it, you can enroll in my online course "MikroTik Switching – Spanning Tree Protocol", where I speak about it in detail. You can enroll in this course at the following URL: https://mynetworktraining.com/p/mikrotikswitching-spanning-tree-protocol
Another difference between STP and RSTP is that in RSTP the BPDUs are sent every 2 seconds as keep-alives, and if a switch doesn't hear from a neighbor for 6 seconds, it considers the neighbor down and removes from its MAC address table all the MAC addresses learned from that neighbor. Now that we have a basic idea about RSTP, let's apply it in a LAB and see what happens with the failover.

LAB:

I am still using the same LAB scenario that I used for the STP LAB. Everything is back to how it was before, and SW1 is the root bridge. We need to change the protocol from STP to RSTP on all 3 switches (by default, RSTP is enabled on MikroTik CRS switches).
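Again a one-liner per switch:

  # On SW1, SW2 and SW3: back to Rapid STP
  /interface bridge set bridge1 protocol-mode=rstp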

Now RSTP has been enabled on all switches.

Let's check if SW1 is still the root bridge and see the state of its interfaces.

Indeed, SW1 is the root bridge and all its interfaces are designated ports. Let's now see the states of the interfaces on SW2.

Very good, as I expected. Last one: we check the SW3 interface states to see if Ether3 is in the alternate state.

Indeed, Ether3 is in the alternate state. Now, with RSTP we can use the edge port setting. Edge should be enabled only on ports that are connected to end devices and not to other switches. What the edge port setting does is make the port go straight to forwarding once you connect an end device to it. In our scenario, we have to do it on Ether23 of SW2, where PC2 is connected, and on Ether5 of SW3, where PC1 is connected. I start with SW2.

I will also do it on Ether5 of SW3.

While enabling the edge port, you may have noticed the Point To Point setting. This setting is for ports that connect one switch to another switch. When you enable Point To Point on a port, you are telling the switch that it is connected to another switch port working in full-duplex mode. By default, it is on auto, and most likely it will be detected correctly, but it is good practice to enable it explicitly on ports that are connected to other switches.

The last thing that I want to do is enable BPDU Guard on the edge ports. As the edge ports are connected to end devices, I should not receive BPDUs on them, correct? So once I enable BPDU Guard, if a BPDU is received on that interface, the interface goes down directly. A lot of attackers use emulation software on their PCs and start sending BPDUs with a low priority to become the root bridge; then all the traffic of the network passes via their PC, and they can intercept everything happening in the network. That's something we need to avoid. So let me enable BPDU Guard on Ether23 of SW2 and Ether5 of SW3.
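The per-port settings from this section, sketched for SW2 (the same idea applies to Ether5 of SW3):

  # Edge port + BPDU guard on the port facing PC2,
  # and explicit point-to-point on the inter-switch links
  /interface bridge port set [find interface=ether23] edge=yes bpdu-guard=yes
  /interface bridge port set [find interface=ether1] point-to-point=yes
  /interface bridge port set [find interface=ether3] point-to-point=yes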

Excellent!!!! Now I am ready for the test. I will issue again an extended ping from PC1 to PC2, then I will disable the port Ether2 on SW3 and will see how long it takes for the network to be operational again.

Let’s open the ping now:

Now I will disable Ether2 interface on SW3

If I look at the ping, I don't even have a single request timeout.

Wow!!!! That’s amazing how RSTP works and solves our issue with a late transition when using the classic STP. That’s the reason why MikroTik uses RSTP as the default one on their Switches.

Multiple Spanning Tree Protocol (MSTP)
Now that we understand how STP and RSTP work, it is time to check the last type of STP protocol that is available on MikroTik CRS3xx switches, which is the Multiple Spanning Tree Protocol (MSTP). We use MSTP when we are using VLANs in our network. Let's say that you have VLANs 100 to 300 in your network. If you use RSTP, then all VLANs share 1 spanning tree instance with one root bridge. That means that none of the VLANs will use the segment where the alternate port is. So we end up like this:

As you can see, the segment between SW2 and SW3 will not be used by any VLAN. So imagine a total of 200 VLANs flowing over the same segments while 1 segment stays unused. Is there a solution for that? Yes, there is: we can use MSTP. With MSTP, you can create 2 instances and divide the VLANs between them. Something like this:
MSTP instance 1: VLAN 100 to 200
MSTP instance 2: VLAN 201 to 300
So you have 2 instances, which means 2 spanning-tree instances on the CPU. Then you say that for instance 1 the root bridge will be SW1, for example, and for instance 2 the root bridge will be SW2. This way, the alternate port which is not used in instance 1 will be used in instance 2 and vice versa. Did you get the idea? To mention that Cisco has something called Per-VLAN STP or Per-VLAN RSTP, where the switch creates 1 instance for each VLAN. That means if you have 200 VLANs, the Cisco switch has to create 200 instances, which is a waste of resources; that's why it is better to use MSTP, where you can group many VLANs under one instance. On MikroTik, Per-VLAN STP and Per-VLAN RSTP are not available, so spanning-tree works port-based and not VLAN-based. Then, how will the scenario look when using MSTP?

As you can see, 2 instances have been created and for each instance we elected a root bridge. This way, the segment which isn't used in one instance will be used in the 2nd one and vice versa. Excellent! Now that you have got the idea of why we need to use MSTP on MikroTik CRS3xx switches, let's see what we need to configure to make it work. The first thing you need to do is to select the MSTP mode on all the switches. Secondly, you need to create a region name (or more, depending on the size of the switching network). The region name has to be the same on all MikroTik switches. Thirdly, you need to set a region revision number, which should also be the same on all switches. Moreover, you need to map the VLANs to the instances. For example, you say that for VLAN 100 to 200 the instance is 2 and

for VLAN 201 to 300 the instance is 3. Finally, you need to change the priority in the bridge ID under one instance so that the root bridge for instance 3 is different from the root bridge of instance 2. Briefly, we have to do the following:
Region name (should be identical on all switches)
Region revision (should be identical on all switches)
VLAN mapping
Changing the bridge priority for instance 3
Finally, it is mandatory to create the VLANs beforehand, because MSTP is for VLANs, and to have trunk ports between all switches. Now let's apply this in a LAB.

LAB:

I am still on the same LAB scenario, but I have created VLANs 10, 20, 30 and 40. I want VLAN 10 and 20 to be in one MSTP instance and VLAN 30 and 40 to be in another MSTP instance. Then I need to make the root bridge of the 2nd MSTP instance different from the one of the 1st MSTP instance; this way I can profit from the segment which is left unused in the 1st MSTP instance. First, the VLANs have to be created and the switch interfaces connected to each other have to be trunk ports (in this LAB I will not show how to create VLANs because by now you should already know how this is done). Now we need to select the spanning-tree mode to be MSTP on all 3 switches.

This is how you can do it on the 3 switches:

You may have noticed that when you selected the MSTP protocol mode, the Region Name and Region Revision fields became available. Now we need to set the Region Name and Revision to be the same on the 3 switches. I am going to use the following:
Region name = Region
Region Revision = 1

Let’s apply this on the 3 switches like the following:
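For reference, the CLI equivalent would be along these lines, assuming the bridge is called bridge1 (use your own bridge name):

/interface bridge set bridge1 protocol-mode=mstp region-name=Region region-revision=1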

The last step is to create the MST Identifiers (MSTI), which are the instances I explained in the theory. I am going to create 2 MSTIs as follows:
MSTI 2 for VLAN 10 and 20
MSTI 3 for VLAN 30 and 40

This needs to be created on each of the switches like the following:
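The CLI equivalent would look roughly like this, again assuming the bridge is called bridge1:

/interface bridge msti add bridge=bridge1 identifier=2 vlan-mapping=10,20
/interface bridge msti add bridge=bridge1 identifier=3 vlan-mapping=30,40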

So MSTP has been configured correctly. Now we need to check which one of the switches is the root bridge for instance 2 and which for instance 3. If we look at the LAB scenario, we see that SW1 has the lowest bridge ID, so logically we should conclude that SW1 will be the root bridge for MST instances 2 and 3, because all priorities on the switches are the same and SW1 has the lowest MAC address. Let's check if our thinking is correct. I will check first for identifier 2:

Indeed, it is the root bridge for the identifier 2.

Let’s check for the identifier 3 to see if SW1 is also the Root Bridge:

SW1 is also the root bridge for identifier 3, so our logic is correct. Now, this is not exactly what we wanted. We wanted SW1 to be the root bridge for identifier 2 but SW2 to be the root bridge for identifier 3. How can we do that? Let me go to SW2 and show you what we need to do.

As you can see, on SW2 I go to identifier 3 and I lower the priority from 8000 to 7000. By doing that, SW2 becomes the root bridge for MSTI 3 because it now has a lower bridge ID. Got it?
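The same change from the CLI would be roughly as follows (bridge1 is an assumed bridge name, and the priority value is written in hex as it appears in Winbox):

/interface bridge msti set [find bridge=bridge1 identifier=3] priority=0x7000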

Let’s justify that on SW2 if it became the root bridge for MSTI 3.

Indeed, it became the root bridge for MSTI 3. That means all its ports will be designated on MSTI 3. Let's check:

You see that for identifier 3, all ports of SW2 are designated because it is the root bridge, but for identifier 2 we have 1 root port, meaning that SW2 isn't the root bridge for identifier 2. Let's now check SW1, which is the root bridge for identifier 2. Are all its ports designated on identifier 2?

Yes indeed, all ports on MSTI 2 are designated ports, which means SW1 is the root bridge for MSTI 2, as you can see in the picture below:

Finally, I am curious to know where the Alternate ports happen on MSTI 2 and MSTI 3 because I haven’t seen it on SW1 and SW2. It should be on SW3, correct? Let’s check the port states on SW3 for both MSTI 2 and MSTI3:

As you can see, the Alternate ports are both on SW3. Ether3 is the alternate port for MSTI 2 and Ether2 is the alternate port for MSTI 3. They are on 2 different segments, which means that the segment blocked on MSTI 2 is being used on MSTI 3 and vice versa. Wow! That's a really great LAB. This is all about MSTP, and also all you need to know about the Spanning Tree Protocol for the MTCSWE exam. I hope you enjoyed this chapter and I'll see you in the upcoming one.

4 Link Aggregation In this chapter, I will talk about Link Aggregation on Switching or what we normally call Bonding on MikroTik. The idea is very simple. If we have a scenario like this, then one of the interfaces will be on Alternate because of STP as we have redundancy:

So you are wasting a complete link because of STP. Of course, we want STP to keep running to avoid loops, but we also want to use the 2 links at the same time. So what options do we have? That would be bonding. With bonding, you group 2 or more switch links into one logical bonding interface, and traffic can flow over the 2 physical interfaces together. When doing bonding, the MikroTik switch sees the bonding interface as a normal interface and applies STP to it, which means STP is no longer applied to the member interfaces inside the bond. So we will end up having this scenario:

Bonding on MikroTik Switches has different modes, some of them use the hardware offload while others will use the software offload. Below is a table showing the different modes of bonding on MikroTik CRS3xx switches:

As you can see, there are 7 different bonding modes available on MikroTik switches. For the MTCSWE course, you aren't really required to know all the details about each mode, but I am going to explain each mode briefly.
Balance-rr mode: in this mode there is round-robin load balancing. Slave interfaces in the bonding interface transmit and receive data in sequential order. It provides load balancing and fault tolerance.
Active Backup mode: this mode provides link backup. Only one slave can be active at a time. Another slave becomes active only when the first one fails.
Balance-xor mode: this mode balances outgoing traffic across the active ports based on the hashed protocol header information and accepts incoming traffic from any active port. The mode is very similar to LACP except that it is not standardized and works with the layer-3-and-4 hash policy.
Broadcast mode: when ports are configured with broadcast mode, all slave ports transmit the same packets to the destination to provide fault tolerance. This mode does not provide load balancing.
802.3ad LACP mode: this is the open standard IEEE 802.3ad dynamic link aggregation. In this mode, the interfaces are aggregated in a group where

each slave shares the same speed. It provides fault tolerance and load balancing. Slave selection for outgoing traffic is done according to the transmit-hash-policy. As it is an open standard protocol, it can be used with any vendor and the link aggregation will form without any problem.
Balance-tlb mode: outgoing traffic is distributed according to the current load on each slave. Incoming traffic is not balanced and is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave. This is mostly used on Linux servers running 2 or more NICs in a bond.
Balance-alb mode: adaptive load balancing. The same as balance-tlb, but received traffic is also balanced. The device driver should have support for changing its MAC address.
In this book, I will focus on the open-standard, vendor-neutral bonding mode, which is 802.3ad or what we normally know as LACP (Link Aggregation Control Protocol). As briefly explained, 802.3ad mode is an IEEE standard also called LACP. It includes automatic configuration of the aggregates, so minimal configuration of the switch is needed. The standard mandates that frames be delivered in order and that connections should not see reordering of packets. It also mandates that all devices in the aggregate operate at the same speed and duplex mode (so you should be careful to set the same speed and duplex on the switch interfaces connected to each other). LACP balances outgoing traffic across the active ports based on hashed protocol header information and accepts incoming traffic from any active port. The hash includes the Ethernet source and destination address and, if available, the VLAN tag and the IPv4/IPv6 source and destination address. How this is calculated depends on the transmit-hash-policy parameter. When I do the LACP LAB configuration, you will see that there are 2 possible link-monitoring options to use: ARP and MII. What are those exactly? As we are bonding 2 or more Layer 2 interfaces together, we need to monitor

the links, so that if one of them goes down the switch stops sending frames to it. With ARP monitoring, the switch sends ARP queries and uses the responses as an indication that the link is operational. If an ARP reply is received from the other side, the switch knows the link is operational and keeps sending traffic over it. RouterOS sets the arp-interval to 100ms by default, but you can change it if you want. MII monitoring has a similar function; however, it only monitors the state of the local interface, so when the interface is down the switch knows that the link is down and when the interface is up the switch knows that the link is up. The interface driver should support MII in order to use it; otherwise, if it is not supported and MII is used, the link will always be shown as up even when it is down. Now back to 802.3ad mode: ARP link monitoring is not recommended, because the ARP replies might arrive only on one slave port due to the transmit hash policy on the LACP peer device. This can result in unbalanced transmitted traffic, so MII link monitoring is the recommended option. Enough theory, let's apply a LAB for Link Aggregation.

LAB:

I have this scenario where SW1 and SW2 are connected to each other via interfaces Ether9 and Ether10. If we leave it this way, we will end up with an alternate port somewhere and 1 link that cannot be used, so I will apply bonding using the open standard 802.3ad and see if the bond

will form. The first thing we should check is whether those interfaces on the 2 switches have the same speed and duplex. By default, auto-negotiation is enabled on MikroTik and this could cause issues, so it is better to hard-code the speed and duplex on each interface ourselves. I will show you how to do this on one interface, then you will know how to apply it to all the other interfaces.
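As a sketch, the CLI equivalent on SW1 would be something like this (repeat for the other member interfaces):

/interface ethernet set ether9 auto-negotiation=no speed=100Mbps full-duplex=yes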

As you can see, I have set Ether9 of SW1 to 100 Mbps speed and full duplex. I did the same for Ether10 on SW1 and for the 2 corresponding interfaces on SW2. Excellent! Now it is time to start configuring the bonding. I will start with SW1. Remember, we have to use 802.3ad mode and the link monitoring should be MII.
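Here is a minimal CLI sketch of the bonding interface on SW1, assuming the name bonding1 to match the screenshots; the transmit-hash-policy value is just one reasonable choice:

/interface bonding add name=bonding1 slaves=ether9,ether10 mode=802.3ad link-monitoring=mii transmit-hash-policy=layer-2-and-3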

Same I should do on SW2 as the following:

If you see on SW2, the bonding1 interface, which was dynamically created, is running

You can also see that the bonding interface appears in the interface list with all the other interfaces, with type Bonding, as follows:

On SW1, you will see the same result. So now the bond is formed and both switches can use the 2 links to send and receive traffic at the same time, which means we are increasing the throughput. To test this, I will put IP addresses from the same range on the bonding interfaces of the 2 switches and run a bandwidth test.

Let’s put an IP on SW1 bonding interface of 192.168.0.1/24

I will put an IP address of 192.168.0.2/24 on the bonding 1 interface of SW2.

I will ping from SW2 to SW1 IP address 192.168.0.1

As you can see, the ping is working, so the Layer 2 load balancing is working without any issue. The last example that I wish to show you is Balance-rr. I will change the bonding mode to balance-rr, run a bandwidth test and check the result. Remember, with balance round-robin one frame is sent over one link and the next frame over the other link. We start with SW1:
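For reference, switching the existing bond to round-robin from the CLI could look like this (assuming the interface is called bonding1):

/interface bonding set bonding1 mode=balance-rr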

Will do the same on SW2

I will ping from SW2 to SW1 and see the result.

Last thing, I will open BW test from SW2 to SW1 and see if the 2 links are being used at the same time.

Excellent, both links are fully used now. This is how you can increase your throughput on Layer2 using link aggregation.

5 Port Isolation In this new chapter, I will speak about a very nice feature that we have on MikroTik switches, which is port isolation. Port isolation is a process in which you isolate a port (or group of ports) from another port (or group of ports). This means that an end device connected to one port will not be able to communicate with an end device connected to the other port. This is very useful, for example, when a company has only 1 MikroTik CRS3xx switch: instead of going through VLANs to segment the Layer 2 network, you can use port isolation, which is much easier to configure and works perfectly. On MikroTik CRS3xx, port isolation is available on the switch chip since RouterOS v6.43, which means it is hardware-offloaded and it doesn't go to the CPU. There are 2 different scenarios where you can use port isolation, which are:
Isolated Switch Groups
Private VLAN
Both follow the same idea as explained. Let's start with the Isolated Switch Groups and do a LAB for it.

Isolated Switch Groups:

As you can see, I am grouping ports on the switch so that devices within a group can communicate with each other, but they cannot communicate with the other group. Bear in mind that STP (when used) is not aware of the underlying port isolation configuration; therefore there are no separate spanning trees for each isolated network, just a single one for all isolated networks. This can cause some unwanted behavior (e.g. devices on isolated ports might select a root bridge from a different isolated network). That's why I highly recommend using port isolation only in a single-switch scenario.

Another important note: remember to have HW-offload enabled on the ports that you want to use in port isolation, because if HW-offload is disabled then port isolation will not work and RouterOS will not notify you about this.

Let’s move to the LAB now.

LAB:

As you can see, the plan is to isolate 2 pairs of ports, so Ether15 and Ether16 can communicate with each other but not with Ether17 and Ether18, and vice versa. Let's see how this can be applied. First we need to add all those ports to a bridge and make sure that HW-offload is enabled. I am sure by now you know how to add ports to a bridge, so I will not show it again, but here is the end result:

After having all the ports in a bridge with hardware offload enabled, we need to go to the Switch tab in Winbox and allow Ether15 to forward only to Ether16, and Ether17 only to Ether18. Let me show you how to do this:
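A CLI sketch of the first pair, assuming CRS3xx port-isolation support (the second pair is configured the same way):

/interface ethernet switch port-isolation set ether15 forwarding-override=ether16
/interface ethernet switch port-isolation set ether16 forwarding-override=ether15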

Same process will be done between Ether17 and Ether18. Very good, so the end result that we have is the following:

Now I will connect 2 end devices, one to Ether15 and another one to Ether16, with IPs from the same range, and I will ping from one to

another. Based on what we have configured, the ping should work. Let’s try.

As you can see, the ping is working. I will move the cable of my PC to Ether17 while keeping the other device plugged on Ether15 and will re-do the ping. Do you think it will work too? Let’s try:

As you can see, it is not working. So, the port isolation is perfectly working without any issue.

Private VLAN There is another scenario where you can use port isolation, called Private VLAN. Even though the name includes VLAN, we are not going to use VLANs at all. The idea is very simple. Let me show it to you with the help of an illustration.

As you can see, if you have many servers and you don't want them to communicate with each other, you can use port isolation and allow each of their switch ports to communicate only with the port connected to the router, so they can still reach the internet. So you allow forwarding only between:
Ether1 and Ether4
Ether1 and Ether3
Ether1 and Ether2

For this scenario, I will show you how this can be configured using the command line. First you need to put all interfaces in a bridge with HW-offload enabled:
/interface bridge add name=bridge1
/interface bridge port
add interface=ether1 bridge=bridge1 hw=yes
add interface=ether2 bridge=bridge1 hw=yes
add interface=ether3 bridge=bridge1 hw=yes
add interface=ether4 bridge=bridge1 hw=yes
Then you need to isolate the ports so they can only communicate with Ether1:
/interface ethernet switch port-isolation
set ether2 forwarding-override=ether1
set ether3 forwarding-override=ether1
set ether4 forwarding-override=ether1

As you can see, we didn't use any VLAN here; Private VLAN is just a name, and VLANs aren't actually used.

Bridge Horizon Another method of doing port isolation is Bridge Horizon. If you are using a MikroTik switch that doesn't support HW-offload, you can still do port isolation by using the Bridge Horizon option.

The bridge horizon option can be found on the ports inside a bridge as follows:

The idea is as follows: if you set the same bridge horizon value on 2 or more ports, then they will not be able to communicate with each other. In other words, traffic will not flow out of a port whose horizon value is the same as the one it came in on. Again, remember that for Bridge Horizon the ports in the bridge should have HW-offload disabled. Let's do a LAB to see if this is going to work or not.

LAB:

Let's put Ether15, Ether17 and Ether18 in a bridge as ports and make sure that hardware offload is disabled. I will show you how to do that on Ether15 only, and you can do the same for Ether17 and Ether18.

As you can see, Hardware offload is not checked. Now all ports have been added to the bridge and we have the following result:

Let's assume that I want Ether15 to communicate with Ether17 but not with Ether18. Remember, if you set the same Bridge Horizon value on two ports, they won't be able to speak to each other.

For this, I will put the following bridge horizon values on the interfaces:
Ether15 = 2
Ether17 = 3
Ether18 = 2
Doing so, Ether15 and Ether18 will not be able to communicate with each other, but Ether15 and Ether17 will. Let's do the change on Ether15:
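The same change for Ether15 from the CLI would be roughly:

/interface bridge port set [find interface=ether15] horizon=2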

I will change the Bridge Horizon on Ether17 to 3 as following:

And finally, will make the Bridge Horizon on Ether18 as 2:

Here is the end result:

I will connect 2 end devices: one to Ether15 and another one to Ether17. They both have IPs from the same range. Then I will ping from one to another. Based on the theory, they should be able to reach each other because the bridge horizons on Ether15 and Ether17 are not the same. Let's try:

Here you go. You see the ping is working. Now I will move the device from Ether17 to Ether18 while keeping the 1st one on Ether15. That means we have one side on Ether15 and another side on Ether18. Both interfaces have the same Bridge Horizon which is 2, so they should not be able to reach each other. Let’s try:

Indeed, the 2 end devices aren’t able to reach each other. That’s what I was expecting.

So, this is all about Bridge Horizon and port isolation.

7 Quality of Service (QOS) On Layer 2 we can implement Quality of Service (QoS). That means, for example, we can prioritize one type of traffic over others. This can be done using the Layer 2 QoS standard 802.1p. We have seen, when talking about VLANs, that a header is inserted into the frame. This header contains the Priority Code Point (PCP), which is 3 bits and can be used for the Class of Service.

If we compare Layer2 QOS (802.1p) to Layer3, here will be the result:

As you can see, on Layer 3 the QoS marking is DSCP while on Layer 2 it is 802.1p. You can map the incoming Layer 3 ToS (DSCP) to a Layer 2 CoS (802.1p). This can be done using firewall mangle rules, and it can also be done at the bridge filter level.

Now, if we want to implement QoS on Layer 2, the old way was to use queues (whether simple queues or queue trees). To make this work, we have to enable the setting "Use IP Firewall" under the bridge, as you see below. By doing that, the created queues can work on the bridged traffic, but you will be using the CPU and HW-offloading will be disabled.

The other option is to use the bridge filter to mark packets and then use parent=interface in the queue tree. This option also uses the CPU, so we will have a higher CPU load on the switch and all Layer 2 frames will go through the CPU. This is not the best way to do QoS on Layer 2. For this reason, on the CRS3xx series there is a much better way to apply QoS while keeping hardware offload. If you go to the interface from the Switch tab in Winbox, you will see the possibility to limit the Ingress Rate and Egress Rate.

On the ingress (incoming traffic), the CRS3xx uses policing: if the traffic exceeds the threshold, the additional frames are dropped. On the egress (outgoing traffic) it uses shaping, which is gentler on the traffic than policing because excess frames are buffered and released rather than being dropped as strictly as with policing.

This can be seen here:

As you can see, on Ether15 I am limiting the Ingress Rate to 3 Mbps and the Egress Rate to 2 Mbps. Let's do a LAB to see if this will work. LAB:

I have SW1 and SW2 connected to each other on their Ether1 interfaces. Let's put the interfaces in a bridge and make sure that HW-offload is enabled. We start with SW1:

We do the same on SW2:

Now, on each switch, the Ether1 interface is in a bridge. I would like to assign an IP address of 192.168.0.1/24 on Ether1 of SW1 and 192.168.0.2/24 on Ether1 of SW2, so I can run a bandwidth test and see how much traffic they can push without any limitation; then I will apply a limitation on one of the switches. Let's start with SW1:

We do the same on SW2 and we put an IP address of 192.168.0.2/24 on interface Ether1

Let’s do BW-test from SW2 to SW1 and see how much I will be getting on both upload and download as traffic.

To do that, you have to go to SW2 and from Tools you go to Bandwidth, finally you put the IP of SW1 with both directions and you click start:

As you can see, I am able to reach almost 100 Mbps on Tx and 820 Mbps on Rx.

Now, let's apply the Layer 2 QoS on SW1. I will only allow 5 Mbps on ingress and 5 Mbps on egress. Then we re-do the test and see the result.

So we have limited the ingress and egress traffic on Ether1 to be 5 Mbps on each direction.
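For reference, a CLI sketch of this limitation, assuming the CRS3xx switch-port shaping properties (verify the property names on your model):

/interface ethernet switch port set ether1 ingress-rate=5M egress-rate=5M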

Let’s do the BW-Test from SW2 again and see the result.

As you can see, it is not exceeding the 5 Mbps on each direction. So our configuration is correctly done.

There is also another way to limit the traffic: using Switch Rules. With a switch rule you can limit only the ingress traffic, and you have plenty of matching options such as:
MAC address
Port
VLAN
Protocol
DSCP
And so on. Depending on the switch model, the number of rule entries will be bigger or smaller. Below is a list of the number of rule entries that you can use on most MikroTik switches:

My MikroTik switch is a CRS326-24G-2S+, so I can have up to 128 rules. Let's apply that in a LAB. Before I do the LAB, I have removed the ingress/egress rate limits on Ether1 of SW1.

LAB:

I am still on the same scenario. I want to limit the traffic from SW2 to SW1 to 5 Mbps using an ACL rule.

I will go to SW1 and do the following:

Remember, this will limit the ingress traffic to SW1. As you can see, in the Match section I have a lot of options that I can use, such as MAC, VLAN, IP address, Protocol, etc. Depending on your case, you can use any of the available matchers.
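A rough CLI equivalent of such a rule, assuming the switch chip is called switch1 and using the rule's rate property for the ingress limit:

/interface ethernet switch rule add switch=switch1 ports=ether1 rate=5M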

Let’s do now BW-Test from SW2 to SW1 (I still have IP addresses on Ether1 interfaces of SW1 and SW2).

As you can see, it is only limiting the Tx of SW2, which is the ingress of SW1. If you want to limit the Rx of SW2 as well, you need to create a rule on SW2 to limit its ingress; this way you are limiting both Tx and Rx. This is all you need to know about QoS for this course.

8 Layer 2 security In this chapter, I am going to speak about Layer 2 security. As the MikroTik switch works on Layer 2, we need to use some techniques to secure our Layer 2 switching network.

IGMP Snooping The 1st feature that I would like to discuss is IGMP snooping. What is IGMP snooping? IGMP snooping is a method that network switches use to identify multicast groups, which are groups of computers or devices that all receive the same network traffic. It enables switches to forward multicast packets only to the correct devices in their network. The Internet Group Management Protocol (IGMP) is a network-layer protocol that allows several devices to share one multicast IP address so they can all receive the same data. Networked devices use IGMP to join and leave multicast groups, and each multicast group shares an IP address. However, switches normally cannot see which devices have joined which multicast groups, since they do not process network-layer protocols. IGMP snooping is a way around this: it allows the switch to "snoop" on IGMP messages, even though they technically belong to a different layer of the OSI model. IGMP snooping is not a feature of the IGMP protocol itself, but rather an adaptation built into some network switches.

Let’s see what the result would be if we didn’t have IGMP snooping enabled on the MikroTik CRS3xx Switch:

As you can see, all devices received the multicast, including the one that shouldn't have received it. Let's see now what the result is if we enable IGMP snooping:

As you can see, only the PCs that were supposed to receive the multicast stream have received it, and that's because of the enabled IGMP snooping feature. Depending on which MikroTik switch you are using, IGMP snooping may work on the switch chip as hardware offload or in software. Enabling IGMP snooping is very easy: you go to the bridge interface and check IGMP Snooping as follows:
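The CLI equivalent, assuming the bridge is called bridge1:

/interface bridge set bridge1 igmp-snooping=yes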

You will see that once you check the IGMP Snooping, you will have a new tab added called “IGMP Snooping”. Let’s check what is inside of it.

IGMP version 2 is for multicast streams using IPv4 and MLD version 1 is for multicast streams using IPv6. Normally you should not adjust the settings under the IGMP Snooping tab unless you know what you are doing. That's all you need to know about IGMP snooping.

DHCP Snooping

Another relevant topic in Layer 2 security is DHCP snooping. This feature is available on MikroTik CRS3xx switches as well as on other MikroTik switches, and on the CRS3xx it works as hardware offload. Once you enable DHCP snooping on the switch, you select the port connected to the DHCP server as a trusted port. That means that if a rogue DHCP server is placed on any untrusted port, it won't be able to communicate with the DHCP clients. If we don't use DHCP snooping, someone may plug in a rogue DHCP server and lease IP addresses and gateways to the DHCP clients, so all their traffic passes via that rogue server, which can run a sniffing tool capable of intercepting all traffic; this is called a "Man-in-the-Middle" attack. This is the simple explanation of DHCP snooping; let's apply it in a LAB.

LAB:

Here R1 is acting as a DHCP server which is already configured. I will create a bridge on SW1 and put Ether1, Ether2 and Ether3 inside of it (we will use Ether3 later in this LAB). The result will be as follows:

As you can see, the 3 interfaces on SW1 are in a bridge and they are hardware offloaded.

To enable DHCP snooping, you need to go to the bridge interface and check DHCP Snooping as follows:

Then you go to the Ether1 bridge port (which is connected to the DHCP server) and make it trusted, and Ether2 will be left untrusted.

Here is how you can make Ether1 a trusted port:

Now we make Ether2 (connected to the PC) and Ether3 untrusted ports. By default, MikroTik switches have all ports untrusted, so you don't have to do anything. Here is an example on Ether2 (Ether3 will be the same):
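Here is a minimal CLI sketch of the whole DHCP snooping setup, assuming the bridge is called bridge1:

/interface bridge set bridge1 dhcp-snooping=yes
/interface bridge port set [find interface=ether1] trusted=yes

Ether2 and Ether3 simply stay untrusted, which is the default.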

Now let’s see if the PC will receive an IP address from the DHCP server. The PC is connected to Ether2 which is an un-trusted port.

Yes indeed, it has received an IP address. Now I need to move R1 from interface Ether1 of SW1 (which is a trusted port) to interface Ether3 (which is an untrusted port), and then check whether my PC still receives an IP address from the DHCP server. Let's try. I have moved the cable from Ether1 to Ether3 on SW1. Let's check the PC now. I will release the IP address received on the PC using the following command:

You can see that my PC has released the assigned IP address. Now let's renew the DHCP negotiation to see if the PC will receive an IP address again from the DHCP server, which is now plugged into Ether3 of the switch, an untrusted interface.

As you can see, my PC did not receive an IP address from the DHCP server; it got an APIPA address from the Windows operating system. This way we can protect ourselves from rogue DHCP servers in our network.

Loop Protect Another Layer 2 security feature is Loop Protect. As its name suggests, it protects against loops on Layer 2. You may be wondering: didn't we already cover this when we spoke about the Spanning Tree Protocol? That's true, but Loop Protect is useful when, for example, you have only 1 switch in your network and want to protect yourself from loops. When you have more than one switch, it is highly recommended to use a Spanning Tree Protocol such as RSTP. How does Loop Protect work? First, you need to enable it at the interface level. Once enabled, the interface sends loop protect packets every 5 seconds by default. Loop Protect works by checking the source MAC address of received loop protect packets: if it matches the MAC address of a loop-protect-enabled interface, the switch knows that there is a loop and disables the interface for 5 minutes by default. That's how Loop Protect works. You can enable Loop Protect on Ethernet, VLAN, EoIP and EoIPv6 interfaces. It can also be enabled on a bridge interface, but in that case it is highly recommended to use Spanning Tree instead. Remember that Loop Protect fits cases where we have 1 switch; when we have more than one switch, please use the Spanning Tree Protocol.
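As a sketch, enabling it on a single interface from the CLI, with the default timers written out explicitly:

/interface ethernet set ether5 loop-protect=on loop-protect-send-interval=5s loop-protect-disable-time=5m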

Traffic Storm Control Another Layer 2 protection is Traffic Storm Control, which limits broadcast storms on a port. To configure it, you go to the switch port interface and put the rate that you want to be allowed for the storm traffic, as follows:

As you can see, I have limited the broadcast storm to 10% of the link. That means if my link is running at 100 Mbps, then only 10 Mbps can be occupied by the broadcast storm, which keeps my network operational. That's all about Traffic Storm Control.
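For reference, a hedged CLI sketch of the same idea on a CRS3xx switch port (the property names may differ on other models, so verify on yours):

/interface ethernet switch port set ether15 limit-broadcasts=yes storm-rate=10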

Layer 2 Firewall Still speaking about Layer 2 security features, there is also a Layer 2 firewall. As on Layer 3, Layer 2 also has firewall capabilities; you find them under the bridge filter. Most engineers use this feature to block certain host MAC addresses from entering the network, or to block certain protocols. As with the Layer 3 firewall, the rules are evaluated from top to bottom, and once a rule matches, the rules below the matched one are not checked.

On CRS3xx series, Layer2 Firewall rules will work on the Switch Chip.

Here is an example of how to block the MAC address of one PC, which is abusing our traffic, from working on our network:
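A CLI sketch of such a rule, using a made-up MAC address as a placeholder:

/interface bridge filter add chain=forward src-mac-address=D4:CA:6D:AA:BB:CC/FF:FF:FF:FF:FF:FF action=drop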

As you can see, we have different Matchers and Actions to be used. That’s all about Layer2 Firewall.

ARP Another Layer 2 security feature is at the ARP level. As you know, every

network device needs the source and destination MAC/IP addresses to be able to send data. If a device doesn't have the destination MAC address, it issues an ARP request using broadcast, and the host that has the destination IP address answers with its MAC address to the source. The MikroTik switch keeps the dynamically learned MAC addresses in its ARP table. By default, the MikroTik RouterOS ARP table can have up to 8192 entries, but of course, the more entries you have on the switch, the more CPU and memory resources are used. For this reason, you can make the ARP entries that you want static. The best way to do this is in combination with the DHCP server: once you know that your network is converged, meaning that all devices have received their IP addresses from the DHCP server, you can make the entries static.

LAB:

Let’s say that my network is converged now. On the DHCP Server Router, we can see on IP arp that there is a dynamic entry for my PC:

What we can do is go to the DHCP server and tell it that all clients that have received a leased IP address from the DHCP service will get an ARP entry. Let me show you how:
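From the CLI, the same setting on the DHCP server would be roughly as follows (dhcp1 is an assumed server name):

/ip dhcp-server set [find name=dhcp1] add-arp=yes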

Let’s check the ARP table again, we should see that the entry there.

To make it even more secure, we make it static and on Ether1 we set ARP to reply-only. Let's first make it static:

Now we go to Ether1 and we make the ARP to be reply-only:
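For reference, the same setting from the CLI:

/interface ethernet set ether1 arp=reply-only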

ARP=reply-only means that the router will reply to ARP requests only for the entries which are in its ARP table. That means if our PC requests an IP address from the DHCP server again, the router will provide the IP address because the PC's MAC address is in the router's ARP table. If we connect another PC to Ether1, it will not get an IP address, because that 2nd PC is not in the ARP table of the router.

Since we spoke about the ARP modes on the interface and we have seen reply-only, there are several other modes that I would like to explain to you as well.

Enabled: the default setting on interfaces. Dynamic ARP entries will be added to the ARP table and ARP requests will be answered.
Disabled: ARP is disabled, which means dynamic ARP entries will not be added and ARP requests will not be answered.
Reply-Only: the router will reply to ARP requests but will not add dynamic entries to the ARP table.
Proxy-ARP: mostly used on VPNs when you have 2 networks using the same

range of IP addresses; the router will then act as a transparent ARP proxy between the 2 networks.
Local-Proxy-ARP: mostly used when you have port isolation and you want the devices connected to the isolated ports to communicate with each other. With this mode, the router replies to all client hosts with its own MAC address.
Hosts Table
Another Layer 2 security measure is to make the entries in the bridge Hosts table static. If we look at the bridge, we can see the following dynamically learned entries in its Hosts table:

If you want to make them static for security reason, you can do so. Just click on + and add an entry (I will do the one on Ether2):

You will see right away that it is now static. To make it even more secure, you can go to the bridge interface and set the ARP mode to reply-only, so the bridge will only answer ARP requests for entries which are inside its Hosts table.

Switch Host Table In addition to the bridge level, there is also a Hosts table under the Switch tab, as you can see here:

Those hosts are shown here because their ports are hardware-offloaded. You can
also make them static if you want, by creating them manually the same way we did at the bridge level. Additionally, you have some more options that you can use when creating a static entry at this level. The options are:
Copy to CPU: all traffic for this entry will also have a copy sent to the CPU. This is handy in case you want to use some tools like Netwatch, because at the switch chip level it is not possible to sniff the traffic.
Redirect to CPU: all traffic will be sent to the CPU, so everything will be processed in software and there won't be any benefit from the switch chip hardware offload.
Drop: the traffic will be dropped.
Mirror: used when you have 2 ports mirroring each other, so the traffic of one port is also sent to the other one. This can be used, for example, when using Wireshark to capture some data.

Port Security Another Layer 2 security feature is Port Security. Let's say that we have a PC connected on port Ether2 and we don't want any other device connected to this port to work, only that PC. Then I can apply port security so that only the MAC address of my PC is allowed to pass and any other MAC address is blocked.

LAB:

Still on the same LAB scenario. I want only my PC to be allowed to pass through Ether2 of SW1 and nothing else. First, we need to take the MAC address of the PC and save it:

Now I will go to SW1 and create a rule in the bridge filter so that any device which doesn't have the MAC address of my PC will not be allowed.

Let me show you how you can do that:

Here I am saying: match any traffic coming in on interface Ether2 from a source MAC address different from the MAC address of my PC and being forwarded via the switch to somewhere else (so traffic not destined to the switch itself). In the next picture I set the action to drop, as follows:
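A rough CLI sketch of this rule, using a placeholder MAC address for my PC (note the ! negation on the source MAC matcher):

/interface bridge filter add chain=forward in-interface=ether2 src-mac-address=!D4:CA:6D:AA:BB:CC/FF:FF:FF:FF:FF:FF action=drop log=yes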

I have also enabled logging, so in case someone plugs in a PC which is not mine, a log entry will be shown. Now, as long as I am connecting my own PC, I am able to work normally

without any problem. Next I will connect another PC and see if it will receive an IP address from the DHCP server.

No IP address has been received from the DHCP server. If we check the logging on SW1, we should see an entry like the following:

802.1X Port Based Authentication Another nice feature on Layer 2 is 802.1X port-based authentication, normally referred to as dot1x. This option is available in RouterOS since version 6.45. It provides port-based network access control using EAP over LAN, known as EAPOL. Dot1x has three components:
Supplicant (Client): the one requesting access to the network, which can be a user workstation, printer or IP phone, but can also be a router or a switch.
Authenticator (Switch): the device receiving the request from the

supplicant and forwarding the dot1x credentials from the supplicant to the authentication server. This is mostly the job of the MikroTik switch.
Authentication Server (Radius): the device that has the full user database and allows or disallows the authentication of the supplicants. Most people use a RADIUS server for this job because most RADIUS servers support 802.1X. Note that the RADIUS server doesn't need to be in the same LAN as the authenticator; it could be reachable over the public internet. If that is the case, it is highly recommended to use an encrypted tunnel to the RADIUS server.
Extensible Authentication Protocol (EAP) is the protocol used to carry the authentication between the supplicant and the authentication server.

In the illustration below you can see how the communication happens between the supplicant and the authentication server.

Let's start with the supplicant. As said, the supplicant is the device requesting to join the network. MikroTik RouterOS can also act as a supplicant with 802.1X authentication. To make the MikroTik switch/router a supplicant, you go to the Dot1X tab in Winbox and fill in the Client tab with the required information as follows:

You see there are different EAP methods that can be used:
EAP MSCHAPv2
EAP PEAP
EAP TLS
EAP TTLS

In most cases, you will want to make the MikroTik switch an authenticator. You can make the switch an authenticator on one or more ports, and it is straightforward, as follows:
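A minimal CLI sketch for the authenticator side, assuming a RADIUS server at 10.0.0.10 with a shared secret (both values are placeholders):

/radius add service=dot1x address=10.0.0.10 secret=SharedSecret
/interface dot1x server add interface=ether2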

That’s all you need to know about Dot1X.

Securing Switch Access In the last part of this chapter, you need to know how to secure access to the MikroTik switch. There are different things you can do to harden your MikroTik switch so you do not allow just anyone to access it. For this, you need to take several hardening steps, as follows: 1. Use a different username than admin and set a new password, then delete the default admin user. 2. Allow only some IPs to log in to the switch. Those IPs should belong to the administrators. This can be done from here:

3. Disable insecure services that a hacker may use to access your switch, such as Telnet and FTP.

4. Update RouterOS to the latest version. MikroTik provides newer versions every now and then, fixing bugs and adding new features, so it is highly recommended to upgrade your RouterOS to the latest stable version. 5. Upgrade the RouterBOARD firmware to the latest version as follows:

6. On RouterOS you can do MAC-Telnet, MAC-Winbox and MAC-Ping, which means you use Layer 2 addressing to do Telnet, Winbox and ping. If you don't require that, you can disable them as follows:

7. Neighbor discovery is another way that a hacker can use to discover neighboring switches on the network. You can disable that option if you want.

8. Switch ports that are unused are better disabled completely, so you don't allow unauthorized access to your network by someone just plugging a UTP cable into one of the available ports. 9. It is a good practice to disable Bandwidth-test, because some hackers use this tool to launch a Denial of Service (DoS) attack. When a bandwidth test is running, the switch CPU can hit 100%; the switch then won't be able to forward frames correctly and may end up flooding frames to all ports, which a hacker can exploit to capture the network frames.

10. Set the correct clock on the switch. You can use an NTP client. It is very important that the clock is correct, especially when you want to check the logs in case of any problem. 11. Disable packages that your switch will not be using. Think of Hotspot, IPv6, MPLS, PPP, Routing, Wireless: all those packages are not needed for a switching job. 12. Use the Layer 2 firewall that is available on the switch when you want to filter Layer 2 traffic. This is a very important feature that MikroTik switches have, so why not profit from it. This is everything I wanted to show you on Layer 2 security; see you in the upcoming chapter.

9 Power over Ethernet (PoE) Power over Ethernet, which we normally refer to as PoE, means sending power over the copper wire. Why do we need to do that? Simply to power another device without it needing its own power adapter. On MikroTik hardware, there are different supported PoE types:
Passive PoE up to 30V and 57V
IEEE 802.3af/at (PoE & PoE+)
There is also a newer type, PoE++ (802.3bt), but this is not yet available on any MikroTik hardware to date. The difference between active PoE, such as 802.3af/at, and passive PoE (anything that is not 802.3af/at) is that active PoE devices negotiate and check the incoming power before powering up, and the PoE voltage is always 44-57V, while with passive PoE there is no handshake before powering up and the voltage can be from 18-57V, so you need to check before selecting the PoE injector or which PoE port mode to use. On MikroTik you have 3 PoE-out modes:
Auto on: in this mode, the power sourcing equipment checks for a resistance on the connected port. If a correct resistance range is detected, the power is turned on. The device keeps checking, and if it detects that the cable is unplugged, the port stops providing power until it detects a powered device again.
Forced on: in this mode, the port always applies power. Even if no cable is connected, the device keeps applying power to the port.
Off: the power sourcing equipment does no detection on the port and never applies power to it, so the port behaves just like a normal Ethernet port passing data only.

Let me show you where you can find the PoE modes.

Do you see them? You only need to go to the port which has the PoE capability, and you select the mode that you want to use.
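From the CLI, selecting the PoE-out mode on a PoE-capable port looks like this:

/interface ethernet set ether5 poe-out=auto-on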

PoE Priority Settings On PoE you have the possibility of setting a priority. Why do we need priority? Imagine that you don't have enough power for all connected devices; the switch then works by priority, and a port with a higher priority will carry power to its powered device before other ports do. Think of a 48-port switch that provides PoE on all ports, with IP cameras connected to every port. You can specify the priority on each port. The priority goes from 0 to 99, where 0 is the highest priority and 99 is the lowest. This way, in case there is not enough power, the ports with a higher priority will keep powering their IP cameras and the others will not.

Let me show you where you can change the priority:
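And the CLI equivalent for the priority, assuming Ether5 is the PoE port in question:

/interface ethernet set ether5 poe-priority=10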

PoE Monitoring The last thing to cover in this chapter is how you can monitor the PoE that is being provided to the other device. The easiest way to do that is to run the following command:
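The command I am referring to is the PoE monitor, for example:

/interface ethernet poe monitor ether5 once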

In my case, my Ether5 is not powering any device yet, so you see it is saying waiting-for-load. You can also monitor the PoE in the following ways:
Check the PoE-out LED
Check the warnings that you can see in the GUI/CLI
Check the SNMP traps that you are receiving
Check the logs on your logging system

All those are ways to monitor PoE. Another nice feature that MikroTik provides is restarting the connected device in case it is not reachable. That means you enable a ping from your PoE device to the powered device; if the ping does not come back, the MikroTik switch assumes that the end device has crashed and power cycles it. Let me show you how you can do that.

As you can see, the ping will go to 192.168.0.1, which is the powered device. If within 2 minutes it doesn't get any ping reply, it will power cycle the end device. Of course, 2 minutes is somewhat long, so you can make it shorter if you want. That's all you need to know about PoE; I hope you enjoyed the chapter and see you in the upcoming one.
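For reference, a hedged CLI sketch of this watchdog on the PoE port (property names as I recall them on PoE-capable models; verify on yours):

/interface ethernet set ether5 power-cycle-ping-enabled=yes power-cycle-ping-address=192.168.0.1 power-cycle-ping-timeout=2m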

10 Tools When you want to diagnose Layer 2 problems, you need to use some of the tools that MikroTik RouterOS offers us. In this course we have already passed through most of these tools, so you can consider this chapter a refresher of what we have previously seen.

Switch Stats You can view the statistics that the switch chip is reporting by using the command below. This is useful so you can monitor, for example, how many packets are being sent to the CPU from the switch chip.
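The command in question is, roughly:

/interface ethernet switch print stats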

Bridge Filter Stats In case you have created bridge filter rules, this command will show you how many packets/bytes are matched by those rules.
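For example:

/interface bridge filter print stats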

Bridge VLAN Table This is something we have seen during the course: the bridge VLAN table shows you the port-to-VLAN mappings once you create the VLANs and assign ports to them as tagged or untagged.

Bridge Ports If you want to know the spanning-tree roles of the interfaces, you can go to Bridge Ports. There you can see the role of each interface inside the bridge, whether it is a designated port, root port, alternate port or disabled. You can also see which ports are part of this bridge.

Bridge Hosts Table In this table you can see the MAC addresses that are learned on the bridge ports and the VLAN IDs associated with them, in case you are using VLANs in your network.

IP ARP Table In this table, you will see all ARP entries learned by the switch.

Interface Stats and Monitoring If you want to monitor 1 particular interface on the switch, to see for example if it is up, what speed it is set to, if it is full duplex and so on, you can write the following command:

There is also a way to monitor all interfaces at the same time. You can use the following command:
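As a sketch, monitoring a single interface and, assuming [find] is accepted as a port list here, all of them at once:

/interface ethernet monitor ether1 once
/interface ethernet monitor [find] once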

Port Mirror In case you want, for example, to sniff some Layer 2 traffic, you can copy all traffic from one port to another port where you capture it; this is called port mirroring. With port mirroring you normally use a capture application such as Wireshark to capture the traffic. Port mirroring works in the switch chip, which means that the mirror-source (the port whose traffic you want to copy) and the mirror-target (the port where you wish to capture this traffic) should be on the same switch chip. Applying port mirroring is very easy: you go to the Switch tab and select the Mirror Source and Mirror Target as follows:
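The CLI equivalent, assuming the switch chip is named switch1 and Ether2/Ether3 are the source and target:

/interface ethernet switch set switch1 mirror-source=ether2 mirror-target=ether3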

Sniffer (and Torch) If you use the packet sniffer or torch tools with a HW-offloaded bridge, you will see only traffic to or from the CPU, such as broadcast and multicast. That means you will not be able to sniff the normal traffic, because that traffic does not go to the CPU and the sniffer can only see traffic going to the CPU. To be able to sniff all traffic while keeping HW-offloading, we need to use ACL rules to copy the traffic to the CPU; then we are able to sniff it. Copying the traffic to

the CPU will not affect the original packet forwarding, but it can cause extra CPU load to process these packets.

Let me show you how you can create the rule to copy the traffic to the CPU:
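A minimal sketch of such a rule, assuming the switch chip is named switch1:

/interface ethernet switch rule add switch=switch1 ports=ether1 copy-to-cpu=yes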

Now if you do torch or sniffing, you will be able to capture all traffic passing on Ether1.

Logs A log is a tool that I personally use a lot when I have issues. It gives me valuable information so I can know where the problem is. You can also send all the logs to an external Syslog server in case you wish to save them somewhere else. Checking the logs is very easy: you just need to go to Log and see the entries there.

As you can see, the log on my switch is showing that I have excessive or late collisions on Ether2, which could be a link duplex mismatch, so now I can address the problem and solve it. Of course, I should also fix the date/time, because it is set to the year 1970, which is not good.

SNMP RouterOS supports SNMP, which is a protocol used for monitoring. If you want free-of-charge software to monitor your whole network, you can use MikroTik's The Dude, which works over SNMP. This software will show you graphically how your network is connected and will poll information from all devices and display it in The Dude. If you are interested, I have a complete online video course about MikroTik's The Dude, covering all topics about it. You can register for the course at the following URL: https://mynetworktraining.com/p/mikrotik-monitoring-with-labs This is all you need to know about the tools for this course. I hope you enjoyed it and I'll see you in the upcoming chapter.

11 SwOS MikroTik has another operating system that you can use on the MikroTik CRS3xx series switches. This OS is called SwOS. It is totally different from RouterOS and you can access it via a web user interface only. That means that each CRS3xx series switch has both options available as an OS:
RouterOS (which is enabled by default)
SwOS (which you need to enable to run on the switch)
Note that in case you have made some configuration on the switch in RouterOS, the same configuration will not be carried over if you shift to SwOS. Let's do a LAB to see how we can go from RouterOS to SwOS. LAB:

As you can see, I have my PC connected directly to the CRS3xx switch on its port Ether1. My task is to load SwOS on SW1 and to be able to access it.

As you can see, once we use SwOS, the switch will have the IP address 192.168.88.1, which can be reached on any of the ports. There is also DHCP with fallback, which means the switch can receive an IP address from a DHCP server in case one is connected to the switch (in our case we do not have a DHCP server).

The 2nd thing we need to do is to tell the CRS3xx switch to load SwOS and not RouterOS. Let me show you how you can do that.

From System you go to RouterBOARD, and from there to Settings; there you change the Boot OS to SwOS and click OK. Now let's reboot the switch. Our PC should have an IP address from the 192.168.88.x range so it can reach SW1 (which I already have on my PC).
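The same can be done from the CLI; on dual-boot CRS models the setting is, as far as I recall (verify on your device):

/system routerboard settings set boot-os=swos
/system reboot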

Now SW1 has been rebooted, let me do a ping from my PC to the IP of SW1 which is 192.168.88.1.

Excellent!!! I have a ping reply. That means I can access SW1 from the web interface. I will open the browser and write 192.168.88.1 on the URL and see.

Very good, it showed up. I will put Username admin and no password (which is the default one) then sign in.

MikroTik SwOS is showing up and now I can configure SW1 from the SwOS.

LAB: Now that we have access to the SwOS, let’s see the settings that you can do. We are still on the same LAB scenario.

As you can see, we can do a lot of configuration using SwOS, such as Port Isolation, Bonding, Spanning-Tree, VLANs, etc.

1st let’s see the System tab:

As you can see, from the System tab you can do the following:
Change the switch IP
Change the identity
Allow only specific IP addresses to log in to the switch
Allow logging in to the switch only from specific ports
Enable/disable the MikroTik discovery protocol
Enable/disable DHCP snooping
Check the switch temperature
Change the password

From the Upgrade tab, you can upgrade your SwOS. Of course, you need the switch to be connected to the internet so the upgrade can be done. Upgrading SwOS is different from upgrading RouterOS.

Because my switch is not connected to the internet, it couldn't check whether there is a newer version available to be downloaded and installed. Now let's do another LAB for port isolation.

LAB: Port Isolation

Still on the same LAB scenario. My mission is to make Ether1 a management port. I don't want any other port on the switch to be able to communicate with Ether1, which means I need to isolate it from all other ports on the switch. Let's do that.

I have deselected all ports, so now Ether1 is totally isolated and no other port on the switch can communicate with it. Don't forget to click on Apply All so your change is applied. If I ping now from my PC to SW1, I should still receive a ping reply:

Now let’s do another LAB to show you how you can use Bonding on SwOS.

LAB: Bonding

In this LAB, I have SW1 and SW2 connected to each other on their Ether21 and Ether22 interfaces. I want to do bonding between the 2 switches over those interfaces. The switches have the following OS installed:
SW1: SwOS
SW2: RouterOS
We already know how to configure bonding on RouterOS because we have seen it in this course; however, I will repeat it in this LAB as a refresher. Let's start configuring bonding on the SwOS switch, which is SW1:

You need to go to the LAG tab, which is the abbreviation of Link Aggregation Group, then go to ports Ether21 and Ether22, make them active and click on Apply All.

Now we go to SW2 and enable bonding, but this time using RouterOS.
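As a reminder, here is a minimal RouterOS sketch of what we configure on SW2. I am assuming the bond is named bond1 and that 802.3ad (LACP) matches the active LACP mode we selected on the SwOS side:

# create an LACP (802.3ad) bond over ether21 and ether22
/interface bonding add name=bond1 slaves=ether21,ether22 mode=802.3ad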

Let’s see if the bonding has been formed:
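Besides WinBox, the state of the bond can also be checked from the SW2 terminal, for example (assuming the bond was named bond1 as in the sketch above):

# the R flag in the output means the interface is running
/interface print where name=bond1
# show the negotiation details of the bond
/interface bonding monitor bond1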

Here we go, we have an "R" flag, which means the bonding interface is running. Bottom line: you don't need 2 SwOS switches to make bonding work; you can use another switch running RouterOS, or even one from another vendor, and the bonding will still work. In the last LAB of this chapter and of this course, I want to show you how to configure VLANs on SwOS. I am going to redo the VLAN LAB that we did in the VLAN chapter, but this time using SW1 with SwOS and SW2 with RouterOS.

LAB: VLANs on SwOS

As in the VLAN LAB, R1 will be a DHCP server on both interfaces Ether2 and Ether3. Then I have to configure SW1 so that Ether2 is an access port on VLAN20 and Ether3 is an access port on VLAN30, while Ether1 on SW1 is a trunk port. We do the same on SW2: Ether1 a trunk port, Ether2 an access port on VLAN20 and Ether3 an access port on VLAN30. Once we finish the configuration, R2 should receive an IP address on its Ether2 interface from the 10.20.20.0/24 range and on its Ether3 interface from the 10.30.30.0/24 range. Let's start by configuring the DHCP server on R1 on interfaces Ether2 and Ether3.

Let's put an IP address of 10.20.20.1/24 on Ether2 and 10.30.30.1/24 on Ether3:
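In the terminal, the equivalent commands on R1 would look like this (a minimal sketch, assuming the interfaces are named ether2 and ether3):

# gateway addresses for the two VLAN subnets
/ip address add address=10.20.20.1/24 interface=ether2
/ip address add address=10.30.30.1/24 interface=ether3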

Now let’s configure the DHCP server on Ether2

We will do the same on Ether3. The result will be:
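Instead of the DHCP Setup wizard, the two DHCP servers can also be created from the R1 command line. This is only a sketch, with assumed pool and server names (pool-vlan20, dhcp-vlan20, and so on):

# DHCP server for the 10.20.20.0/24 network on ether2
/ip pool add name=pool-vlan20 ranges=10.20.20.2-10.20.20.254
/ip dhcp-server add name=dhcp-vlan20 interface=ether2 address-pool=pool-vlan20 disabled=no
/ip dhcp-server network add address=10.20.20.0/24 gateway=10.20.20.1
# DHCP server for the 10.30.30.0/24 network on ether3
/ip pool add name=pool-vlan30 ranges=10.30.30.2-10.30.30.254
/ip dhcp-server add name=dhcp-vlan30 interface=ether3 address-pool=pool-vlan30 disabled=no
/ip dhcp-server network add address=10.30.30.0/24 gateway=10.30.30.1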

Excellent. Now I will configure SW2 first, which runs RouterOS, because we already know how to configure it; then I will do SW1, which runs SwOS. We will start by creating the bridge on SW2 and then add the ports Ether1, Ether2 and Ether3 to it:
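For reference, here is the same step as terminal commands (a minimal sketch, assuming the bridge is named bridge1):

# create the bridge and add the three ports to it
/interface bridge add name=bridge1
/interface bridge port add bridge=bridge1 interface=ether1
/interface bridge port add bridge=bridge1 interface=ether2
/interface bridge port add bridge=bridge1 interface=ether3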

Let's start configuring VLANs on SW2. We go to the bridge, then to port Ether2, and give it a PVID of 20:

We do the same on port Ether3, but we give it a PVID of 30:
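The PVID settings from the terminal would look like this (assuming the bridge ports were added as shown above):

# untagged frames arriving on ether2 go into VLAN 20, on ether3 into VLAN 30
/interface bridge port set [find interface=ether2] pvid=20
/interface bridge port set [find interface=ether3] pvid=30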

Now we need to tell SW2 which port is a trunk and which ports are access ports, and on which VLANs. Remember, Ether1 should be a trunk, Ether2 should be an access port on VLAN 20 and Ether3 should be an access port on VLAN 30.
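In the terminal, this step corresponds to the bridge VLAN table entries below (a sketch assuming the bridge is named bridge1; the untagged memberships would otherwise be added dynamically from the PVIDs):

# ether1 carries both VLANs tagged (trunk); ether2 and ether3 are untagged members
/interface bridge vlan add bridge=bridge1 vlan-ids=20 tagged=ether1 untagged=ether2
/interface bridge vlan add bridge=bridge1 vlan-ids=30 tagged=ether1 untagged=ether3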

The last step is to enable VLAN filtering on the bridge. Be careful when you enable it: make sure all your configuration is correct, because you may lose connectivity to the switch. It is also recommended to always keep a backup port on the switch that you can use to access it in case VLAN filtering cuts off your connectivity.
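Enabling VLAN filtering is a single command; as mentioned above, keep a backup port or out-of-band access ready before running it (again assuming the bridge is named bridge1):

# from this point on, the bridge enforces the VLAN table
/interface bridge set bridge1 vlan-filtering=yes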

We are done on SW2.

Now we need to configure the VLANs on SW1 using SwOS.

Here you have to specify which port is a trunk and which ones are access ports, and on which VLAN. Don't forget to click Apply All. Then you go to the VLANs tab and define:
VLAN20 for Ether1 and Ether2
VLAN30 for Ether1 and Ether3

Let's see if R2 will receive IP addresses from the R1 DHCP servers:
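On R2, I am assuming DHCP clients are enabled on Ether2 and Ether3 as in the original VLAN LAB; here is a minimal sketch to configure and check them from the terminal:

# request addresses over DHCP on both interfaces
/ip dhcp-client add interface=ether2 disabled=no
/ip dhcp-client add interface=ether3 disabled=no
# verify the leases that were obtained
/ip dhcp-client print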

Here we go!!!! Indeed, R2 has received the right IP addresses on both interfaces, which means the VLANs are working properly with SwOS.

Final word
Thanks for taking the time to read my book about MikroTik switching. I hope that it has helped you understand the features that are available on MikroTik switches, and that it has prepared you for the MikroTik MTCSWE certification exam. I hope you will leave a review there so other readers can learn about this book. Please keep an eye on my website https://mynetworktraining.com where I host many online courses for MikroTik as well as other vendors. If you wish to stay in contact with me, you can follow me on my social media channels:
https://www.facebook.com/MaherHaddadOfficial
https://www.facebook.com/groups/mynetworktraining
https://twitter.com/mynetraining
https://www.youtube.com/maictconsult
If you have any question or suggestion, you are always welcome to write me an email:
Email: [email protected]
Resource: https://wiki.mikrotik.com/

ABOUT THE AUTHOR
Maher Haddad is the author of many online courses covering different vendors such as MikroTik, Cisco, Juniper, Ubiquiti, Huawei, and LigoWave. As of the beginning of 2021, Maher has more than 50 thousand students enrolled in his online courses from countries all over the world. Additionally, Maher is an authorized Cisco instructor as well as an authorized LigoWave trainer. He runs his own online school, My Network Training (mynetworktraining.com), where he hosts all his online courses. Moreover, Maher is a consultant for many ISPs in Europe, the USA, Canada and Australia. In his spare time, Maher enjoys being with his family, swimming, bicycling and walking.