4 changes: 2 additions & 2 deletions _posts/2014-07-01-the-need-for-network-overlays-part-i.md
@@ -18,7 +18,7 @@ redirect_from: "/2014/07/01/the-need-for-network-overlays-part-i/"
---
The IT industry has gained significant efficiency and flexibility as a direct result of virtualization. Organizations are moving toward a virtual datacenter model, and flexibility, speed, scale and automation are central to their success. While compute, memory resources and operating systems were successfully virtualized in the last decade, primarily due to the x86 server architecture, networks and network services have not kept pace.

### The traditional solution: VLANs
## The traditional solution: VLANs

Way before the era of server virtualization, Virtual LANs (or 802.1q VLANs) were used to partition different logical networks (or broadcast domains) over the same physical fabric. Instead of wiring a separate physical infrastructure for each group, VLANs were used to efficiently isolate the traffic of different groups or applications based on business needs, with a unique identifier allocated to each logical network. For years, a physical server represented one end-point from the network perspective and was attached to an “access” (i.e., untagged) port in the network switch. The access switch was responsible for enforcing the VLAN ID as well as other security and network settings (e.g., quality of service). The VLAN ID is a 12-bit field, allowing a theoretical limit of 4096 unique logical networks. In practice, though, most switch vendors support a much lower number of configured VLANs. Remember that for each active VLAN in a switch, a VLAN database needs to be maintained for proper mapping of the physical interfaces and the MAC addresses associated with the VLAN. Furthermore, some vendors also create a separate spanning-tree (STP) instance for each active VLAN on the switch, which requires additional memory and processing.
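As a hedged sketch (hypothetical VLAN ID and port names), this is what that per-switch provisioning looks like on a Cisco-style CLI, and it is exactly the configuration that must be repeated consistently on every switch in the path:

```
! define the logical network (VLAN ID 100 is made up for this example)
vlan 100
 name app-tier
!
! attach the server-facing access port and enforce the VLAN there
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 100
```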

@@ -27,7 +27,7 @@ VLANs are a perfect solution for small-scale environments, where the number of e
In a virtualized world, where the number of end-points is constantly increasing and can be very high, VLANs are a limited solution that violates one of the main principles behind virtualization: using software to divide one physical resource into multiple isolated virtual environments. Yes, VLANs do offer segmentation of different logical networks (or broadcast domains) over the same physical fabric, but you still need to manually provision the network and make sure the VLANs are properly configured across the network devices. This starts to become a management and configuration nightmare and simply does not scale.


### Where network vendors started to be (really) creative
## Where network vendors started to be (really) creative

At this point, when there was no doubt that VLANs and traditional L2-based networks are not suitable for large virtualized environments, plenty of network solutions emerged. I don’t really want to go into detail on any of those, but you can look up 802.1Qbg, VM-FEX, FabricPath, TRILL, 802.1ad (QinQ), and 802.1ah (PBB) to name a few. In my view, these overcomplicate the network while ignoring the main problem – an L2-based solution is a bad thing to begin with, and we should have looked for something completely different (hint: L3 routing is your friend).

@@ -18,7 +18,7 @@ One of the trickiest things with IPv6 though is the fact that it’s pretty diff

In this post, I want to highlight the address assignment options available with IPv6, which is in my view one of the most fundamental things in IP networking, and where things are pretty different compared to IPv4. I am going to assume you have some basic background on IPv6; while I will cover the theory, I will also show the command-line interface and demonstrate some of the configuration options, focusing on SLAAC and stateless DHCPv6. I am going to use a simple topology with two Cisco routers directly connected to each other using their GigabitEthernet 1/0 interfaces. Both routers are running IOS 15.2(4).

### Let's get the party started
## Let's get the party started

With IPv6, an interface can have multiple prefixes and IP addresses, and unlike IPv4, all of them are primary. All interfaces will have a Link-Local address, which is the address used to implement many of the control plane functions. If you don’t manually set the Link-Local address, one will automatically be generated for you. Note that the IPv6 protocol stack will not become operational on an interface until a Link-Local address has been assigned or generated and has passed Duplicate Address Detection (DAD) verification. In Cisco IOS, we will first need to enable IPv6 routing on the router, which is done globally using the _ipv6 unicast-routing_ command. We will then enable IPv6 on the interface using the _ipv6 enable_ command:
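A minimal sketch of those two steps (the interface name is taken from the topology above; the verification command is included as a suggestion):

```
! enable IPv6 routing globally
ipv6 unicast-routing
!
! enable IPv6 on the interface; a Link-Local address is auto-generated
interface GigabitEthernet1/0
 ipv6 enable
!
! verify the Link-Local address and its DAD state
show ipv6 interface GigabitEthernet1/0
```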

2 changes: 1 addition & 1 deletion _posts/2014-11-30-the-need-for-network-overlays-part-ii.md
@@ -19,7 +19,7 @@ redirect_from: "/2014/11/30/the-need-for-network-overlays-part-ii/"
---
In the [previous post](/2014/07/01/the-need-for-network-overlays-part-i/), I covered some of the basic concepts behind network overlays, primarily highlighting the need to move into more robust, L3-based network environments. In this post I would like to cover network overlays in more detail, going over the different encapsulation options and highlighting some of the key points to consider when deploying an overlay-based solution.

### Underlying fabric considerations
## Underlying fabric considerations

While network overlays give you the impression that networks are suddenly all virtualized, we still need to consider the physical underlying network. No matter what overlay solution you might pick, it’s still going to be the job of the underlying transport network to switch or route the traffic from source to destination (and vice versa).

@@ -28,7 +28,7 @@ LAG can be configured as either static (manually) or dynamic by using a protocol

![LAG 1]({{ site.baseurl }}/assets/lag-blog-1.png)

### Wait... LAG, bond, bundle, team, trunk, EtherChannel, Port Channel?
## Wait... LAG, bond, bundle, team, trunk, EtherChannel, Port Channel?

Let’s clear this up right away - there are several terms used to describe LAG, and they are sometimes used interchangeably. While LAG is the standard name defined by the IEEE specification, different vendors and operating systems came up with their own implementations and terminology. Bond, for example, is well known on Linux-based systems, following the name of the [kernel driver](http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding). Team (or NIC teaming) is also pretty common across [Windows](https://technet.microsoft.com/en-us/library/hh831648.aspx) systems, and lately [Linux](https://fedoraproject.org/wiki/Features/TeamDriver) systems as well. EtherChannel is one of the famous terms, being used in [Cisco’s IOS](http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst3550/software/release/12-2_44_se/configuration/guide/3550SCG/swethchl.html). Interestingly enough, Cisco changed the term in their IOS-XR software to [bundles](http://www.cisco.com/c/en/us/td/docs/routers/crs/software/crs_r4-0/interfaces/configuration/guide/hc40crsbook/hc40lbun.pdf), and in their NX-OS systems to [Port Channels](http://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/5_x/nx-os/interfaces/configuration/guide/if_cli/if_portchannel.html). Oh... I love the standardization out there!

@@ -13,5 +13,5 @@ comments_id: 8
permalink: "/blog/2015/05/11/whats-coming-in-openstack-networking-for-the-kilo-release/"
redirect_from: "/2015/05/11/whats-coming-in-openstack-networking-for-the-kilo-release/"
---
A post I wrote for the Red Hat Stack blog on whats coming in OpenStack Networking for the Kilo release. Check it out [here](https://www.redhat.com/en/blog/whats-coming-openstack-networking-kilo-release).
A post I wrote for the Red Hat Stack blog on [what's coming in OpenStack Networking for the Kilo release](https://www.redhat.com/en/blog/whats-coming-openstack-networking-kilo-release).

@@ -15,13 +15,13 @@ comments_id: 9
permalink: "/blog/2015/06/17/openstack-networking-with-neutron-what-plugin-should-i-deploy/"
redirect_from: "/2015/06/17/openstack-networking-with-neutron-what-plugin-should-i-deploy/"
---
_(This is a summary version of a talk I gave at OpenStack Israel event on June 15th, 2015. Slides are available_ [_here_](https://github.com/nyechiel/presentation-slides/blob/master/20150629%20-%20Cloud%20Native%20Day%20Tel%20Aviv%20-%20OpenStack%20Networking%20with%20Neutron:%20What%20Plugin%20Should%20I%20Deploy.pdf)).
_(This is a summary version of a talk I gave at OpenStack Israel event on June 15th, 2015. [Slides are available on GitHub](https://github.com/nyechiel/presentation-slides/blob/master/20150629%20-%20Cloud%20Native%20Day%20Tel%20Aviv%20-%20OpenStack%20Networking%20with%20Neutron:%20What%20Plugin%20Should%20I%20Deploy.pdf))._

Neutron is probably one of the most pluggable projects in OpenStack today. The theory is very simple and goes like this: Neutron provides just an API layer, and you choose the backend implementation you want. But in reality, there are plenty of plugins (or drivers) to choose from, and the plugin architecture is not always so clear.

The plugin is a critical piece of the deployment and directly affects the feature set you are going to get, as well as the scale, performance, high availability, and supported network topologies. In addition, different plugins offer different approaches for managing and operating the networks.

### So what is a Neutron plugin?
## So what is a Neutron plugin?

The Neutron API exposed via the Neutron server is split into two buckets: the core (L2) API and the API extensions. While the core API consists only of the fundamental Neutron definitions (Network, Subnet, Port), the API extensions are where the interesting stuff gets defined, and where you can deal with constructs like the L3 router, provider networks, or L4-L7 services such as FWaaS, LBaaS or VPNaaS.
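As an illustrative sketch (the resource names are made up), the split is visible in the Neutron CLI of that era: networks, subnets, and ports come from the core API, while the router comes from the L3 API extension:

```
# core (L2) API resources
neutron net-create demo-net
neutron subnet-create demo-net 10.0.0.0/24 --name demo-subnet
neutron port-create demo-net

# L3 router - defined by an API extension, not the core API
neutron router-create demo-router
neutron router-interface-add demo-router demo-subnet
```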

@@ -37,7 +37,7 @@ With the hardware centric ones, the assumption is that a dedicated network hardw

### And what is there by default?

There are efforts in the Neutron community to completely separate the API (or control-plane components) from the plugin or actual implementation. The vision is to position Neutron as a platform, and not as any specific implementation. That being said, Neutron really developed out of the Open vSwitch plugin, and a good amount of the upstream development today is still focused around it. Open vSwitch (with the OVS ML2 driver) is what you get by default, and this is by far the most common plugin deployed in production (see the recent user survey [here](http://superuser.openstack.org/articles/openstack-users-share-how-their-deployments-stack-up)). This solution is not perfect and has pros and cons like any of the other solutions out there.
There are efforts in the Neutron community to completely separate the API (or control-plane components) from the plugin or actual implementation. The vision is to position Neutron as a platform, and not as any specific implementation. That being said, Neutron really developed out of the Open vSwitch plugin, and a good amount of the upstream development today is still focused around it. Open vSwitch (with the OVS ML2 driver) is what you get by default, and this is by far the most common plugin deployed in production (see the [recent user survey](http://superuser.openstack.org/articles/openstack-users-share-how-their-deployments-stack-up)). This solution is not perfect and has pros and cons like any of the other solutions out there.

While Open vSwitch is used on the Compute nodes to provide connectivity for VM instances, some of the key components of this solution are actually not related to Open vSwitch. L3 routing, DHCP, and other services are implemented by dedicated software agents using Linux tools such as network namespaces (ip netns), dnsmasq, and iptables.
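For example (the namespace names below are illustrative; on a real deployment the suffix is the UUID of the Neutron router or network), you can inspect the namespaces those agents create:

```
# list the namespaces created by the L3 and DHCP agents
ip netns list
# qrouter-<router-uuid>    created by the L3 agent: routing table, iptables NAT
# qdhcp-<network-uuid>     created by the DHCP agent: one dnsmasq per network

# run a command inside the router namespace
ip netns exec qrouter-<router-uuid> ip addr
```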

@@ -15,7 +15,7 @@ redirect_from: "/2015/06/23/ipv6-prefix-delegation-what-is-it-and-how-does-it-go
---
IPv6 offers several ways to assign IP addresses to end hosts. Some of them (SLAAC, stateful DHCPv6, stateless DHCPv6) were already covered in [this post](/2014/07/02/ipv6-address-assignment-stateless-stateful-dhcp-oh-my/). The IPv6 Prefix Delegation mechanism (described in [RFC 3769](https://tools.ietf.org/html/rfc3769) and [RFC 3633](https://www.ietf.org/rfc/rfc3633.txt)) provides “a way of automatically configuring IPv6 prefixes and addresses on routers and hosts” - which sounds like yet another IP assignment option. How does it differ from the other methods? And why do we need it? Let’s try to figure it out.

### Understanding the problem
## Understanding the problem

I know that you still find it hard to believe… but IPv6 is here, and with IPv6 there are enough addresses. That means that we can finally design our networks properly and avoid using different kinds of network address translation (NAT) in different places across the network. Clean IPv6 design will use addresses from the Global Unicast Address (GUA) range, which are routable in the public Internet. Since these are globally routed, care needs to be taken to ensure that prefixes configured by one customer do not overlap with prefixes chosen by another.
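As a preview of the mechanism (a hedged sketch: the pool names are made up and the prefixes are from the 2001:db8::/32 documentation range), a delegating router hands out /48s from a larger block via DHCPv6-PD, and a requesting router asks for one:

```
! delegating router (PD server): hand out /48s carved from a /40
ipv6 local pool CUSTOMER-POOL 2001:db8:100::/40 48
ipv6 dhcp pool PD-POOL
 prefix-delegation pool CUSTOMER-POOL
!
interface GigabitEthernet1/0
 ipv6 dhcp server PD-POOL
!
! requesting router (PD client): ask for a delegated prefix
interface GigabitEthernet1/0
 ipv6 dhcp client pd DELEGATED-PREFIX
```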

@@ -15,7 +15,7 @@ comments_id: 11
permalink: "/blog/2015/06/28/neutron-networking-with-red-hat-enterprise-linux-openstack-platform/"
redirect_from: "/2015/06/28/neutron-networking-with-red-hat-enterprise-linux-openstack-platform/"
---
_(This is a summary version of a talk I gave at_ [_Red Hat Summit_](http://www.redhat.com/summit/) _on June 25th, 2015. Slides are available_ [_here_](https://github.com/nyechiel/presentation-slides/blob/master/20150625%20-%20Red%20Hat%20Summit%202015%20-%20Neutron%20networking%20with%20Red%20Hat%20Enterprise%20Linux%20OpenStack%20Platform.pdf)).
_(This is a summary version of a talk I gave at_ [_Red Hat Summit_](http://www.redhat.com/summit/) _on June 25th, 2015. [Slides are available on GitHub](https://github.com/nyechiel/presentation-slides/blob/master/20150625%20-%20Red%20Hat%20Summit%202015%20-%20Neutron%20networking%20with%20Red%20Hat%20Enterprise%20Linux%20OpenStack%20Platform.pdf))._

I was honored to speak for the second time in a row at Red Hat Summit, the premier open source technology event, hosted in Boston this year. As I am now focusing on product management for networking in [Red Hat Enterprise Linux OpenStack Platform](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/), I presented Red Hat’s approach to Neutron, the OpenStack networking service.

12 changes: 7 additions & 5 deletions _posts/2015-12-31-hands-on-with-fedora-kvm-and-cumulus-vx.md
@@ -26,7 +26,7 @@ My goal is to build a four node leaf/spine topology. To form the fabric, each le

![base_topology]({{ site.baseurl }}/assets/base_topology1.png)

### Prerequisites
## Prerequisites

- Install KVM and related virtualization packages. I am running Fedora 22 and used ```yum groupinstall "Virtualization*"``` to obtain the latest versions of libvirt, virt-manager, qemu-kvm and associated dependencies.

@@ -107,19 +107,21 @@ Before we log in to any of the newly created VMs, I first would like to verify t
Useful commands here are **brctl show** and **brctl showmacs**. For example, let’s examine the link between leaf1 and spine3 (note that libvirt bases the vnet MAC on the configured guest MAC address, with the high byte set to 0xFE):

> ```
> $ ip link show vnet1 | grep link link/ether fe:00:01:00:00:13 brd ff:ff:ff:ff:ff:ff
> $ ip link show vnet1 | grep link
> link/ether fe:00:01:00:00:13 brd ff:ff:ff:ff:ff:ff
> ```
>
> ```
> $ ip link show vnet10 | grep link link/ether fe:00:03:00:00:31 brd ff:ff:ff:ff:ff:ff
> $ ip link show vnet10 | grep link
> link/ether fe:00:03:00:00:31 brd ff:ff:ff:ff:ff:ff
> ```

> ```
> $ brctl show virbr1
> brctl show virbr1
> ```

> ```
> $ brctl showmacs virbr1
> brctl showmacs virbr1
> ```
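That high-byte trick can be reproduced in the shell (the guest MAC below is hypothetical, using libvirt's default 52:54:00 guest OUI):

```shell
# libvirt derives the host-side vnet MAC from the guest MAC by
# replacing the first octet with 0xfe; a high first octet keeps the
# tap devices from ever becoming the bridge's own MAC, since a Linux
# bridge adopts the numerically lowest MAC among its ports
guest_mac="52:54:00:01:00:13"
host_mac="fe:${guest_mac#*:}"
echo "$host_mac"   # → fe:54:00:01:00:13
```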


@@ -14,7 +14,7 @@ comments_id: 16
permalink: "/blog/2016/01/04/nfv-and-open-networking-with-rhel-openstack-platfrom/"
redirect_from: "/2016/01/04/nfv-and-open-networking-with-rhel-openstack-platfrom/"
---
_(This is a summary version of a talk I gave at [Intel Israel Telecom and NFV event](http://www.telecomnews.co.il/%D7%A2%D7%AA%D7%99%D7%93-%D7%A2%D7%95%D7%9C%D7%9D-%D7%94%D7%A1%D7%9C%D7%95%D7%9C%D7%A8-%D7%9B%D7%A0%D7%A1-%D7%90%D7%99%D7%A0%D7%98%D7%9C-Intel-Israel-Telecom-NFV-event-2015.html) on December 2nd, 2015. Slides are available [here](https://github.com/nyechiel/presentation-slides/blob/master/20151202%20-%20Intel%20Israel%20Telecom%20Event%20-%20NFV%20and%20Open%20Networking.pdf)_).
_(This is a summary version of a talk I gave at [Intel Israel Telecom and NFV event](http://www.telecomnews.co.il/%D7%A2%D7%AA%D7%99%D7%93-%D7%A2%D7%95%D7%9C%D7%9D-%D7%94%D7%A1%D7%9C%D7%95%D7%9C%D7%A8-%D7%9B%D7%A0%D7%A1-%D7%90%D7%99%D7%A0%D7%98%D7%9C-Intel-Israel-Telecom-NFV-event-2015.html) on December 2nd, 2015. [Slides are available on GitHub](https://github.com/nyechiel/presentation-slides/blob/master/20151202%20-%20Intel%20Israel%20Telecom%20Event%20-%20NFV%20and%20Open%20Networking.pdf))._

I was honored to be invited to speak at a local Intel event about Red Hat and what we are doing in the [NFV](http://www.etsi.org/technologies-clusters/technologies/nfv) space. I only had 30 minutes, so I tried to provide a high-level overview of our offering, covering some main points:

@@ -23,12 +23,12 @@ I recently attended the Red Hat Summit 2016 event that took place at San Francis

In this short post I wanted to highlight a few sessions which are relevant to networking and were presented during the event. While video recordings are not available, slide decks can be downloaded in a PDF format (links included below).

#### Software-defined networking (SDN) fundamentals for NFV, OpenStack, and containers
## Software-defined networking (SDN) fundamentals for NFV, OpenStack, and containers
- Session overview: With software-defined networking (SDN) gaining traction, administrators are faced with technologies that they need to integrate into their infrastructure. Red Hat Enterprise Linux offers a robust foundation for SDN implementations that are based on open source, standards-based technologies and designed for deploying containers, OpenStack, and network function virtualization (NFV). We'll dissect the technology stack involved in SDN and introduce the latest Red Hat Enterprise Linux options designed to address the packet processing requirements of virtual network functions (VNFs), such as Open vSwitch (OVS), single root I/O virtualization (SR-IOV), PCI Passthrough, and DPDK-accelerated OVS.
- [Slides](https://rh2016.smarteventscloud.com/connect/fileDownload/session/6E629B9CBED8910321AEDD4BA6F18430/SS43514_Yechiel-SS43514_%20SDN%20fundamentals%20for%20NFV,%20OpenStack,%20and%20containers%20[Red%20Hat%20Summit%202016].pdf)


#### Use Linux on your whole rack with RDO and open networking
### Use Linux on your whole rack with RDO and open networking
- Session overview: OpenStack networking is never easy--each new release presents new challenges that are hard to keep up with. Come see how open networking using Linux can help simplify and standardize your RDO deployment. We will demonstrate spine/leaf topology basics, Layer-2 and Layer-3 trade-offs, and building your deployment in a virtual staging environment--all in Linux. Let us demystify your network.
- [Slides](https://rh2016.smarteventscloud.com/connect/fileDownload/session/40012FDDA9D26F0B47B9D109F63126E3/ssuehle-summit2016.pdf)
