
DNA Center 1.2.10 Software-Defined Access Multi-Site Lab Guide Version 1.00 09 July 2019

JULY 9, 2019 PRESENTED BY: CISCO’S SOLUTIONS READINESS ENGINEERING TEAM


Contents

Executive Summary
SD-Access Overview
SD-Access Fabric
SD-Access Network Underlay
SD-Access Network Overlay
SD-Access Policy
SD-Access Segmentation
SD-Access Fabric Wireless
DNA Center Overview
Software-Defined Access Versioning
Introduction to SDA Distributed Campus
Fabric Site
Fabric Multi-Site
Fabric Control Plane state distribution
About this Lab
Lab Topology (Simplified View)
Physical Topology
Addresses and Credentials
Exercise 1: Introduction to DNA Center 1.2.10
DNA Center Package Management
Exercise 2: Reviewing the pre-deployed Fabric SITE 1
Sites and Buildings – Network Hierarchy Tab
Validate Shared Common Servers – Network Settings Tab (Part 1)
Device Credentials – Network Settings Tab (Part 2)
IP Address Pools – Network Settings Tab (Part 3)
About Global and Site-Specific Network Settings
About ISE and DNAC Integration
Exercise 3: Using the DNA Center Discovery Tool
Exercise 4: Using the DNA Center Inventory Tool
Exercise 5: Using the DNA Center Design Application
Creating Sites and Buildings – Network Hierarchy Tab
Configuring Network Settings
Exercise 6: Using the DNA Center Policy Application
Creating VNs and Binding SGTs to VNs
Exercise 7: Using the DNA Center Provision Application
About Provisioning
Assigning Devices to a Site – Provisioning Step 1
Exercise 8: Provision Devices to a Site
Exercise 9: Reserve IP Address Pool
About Network Settings Inheritance
Exercise 10: Creating the Fabric Overlay
Identify and create Transits
Concept of SDA-Transit and Transit Control Plane
Creating the Fabric Domain (Fabric SITE 2)
Fabric Sites and Fabric Domains
Multi-Site Fabric Domain
About Software-Defined Access Validation Feature in DNA Center
About Pre-Verification
About Verification (Post-Verification)
Adding Devices to the Fabric
Decoding the Fabric Topology Map
Fabric Provisioning Options
Exercise 11: DNA Center Host Onboarding
Host Onboarding Part 1
Host Onboarding Part 2 (for CAMPUS VN)
Exercise 12: Exploring Provisioned Configuration
Exploring LISP Configurations – CLI
Exploring BGP Configurations – CLI
Exploring BGP Routes – CLI
Exercise 13: Fusion Routers and Configuring Fusion Internal Router
Border Automation & Fusion Router Configuration Variations – BorderNode and FusionInternal
Creating Layer-3 Connectivity
About Route Distinguishers (RD)
About Route Targets (RT)
Putting It All Together
Extending the VRFs to the Fusion Internal
Optional - Extending the VRFs – Verification
Use VRF Leaking to Share Routes on FusionInternal
Use VRF Leaking to Share Routes and Advertise to BorderNode
About Route Leaking
Route Leaking – Validation (Control-Border Nodes)
Optional - Route Leaking – Validation (Edge Nodes)
Exercise 14: Exploring Transit Control Plane Configuration
Exercise 15: Testing the Inter-Fabric Sites connectivity


Executive Summary
Digital transformation is creating new opportunities in every industry. In healthcare, doctors are now able to monitor patients remotely and leverage medical analytics to predict health issues. In education, technology is enabling a connected campus and more personalized, equal access to learning resources. In retail, shops are able to provide an omnichannel experience by engaging customers online and in-store, with location awareness. In today's world, digital transformation is absolutely necessary for businesses to stay relevant.

For any organization to successfully transition to a digital world, it must invest in its network. It is the network that connects all things and is the cornerstone where digital success is realized or lost. It is the pathway for productivity and collaboration and an enabler of improved end-user experience. It is also the first line of defense in securing enterprise assets and intellectual property.

Software-Defined Access is the industry's first intent-based networking solution for the enterprise. An intent-based network treats the network as a single system that provides the translation and validation of the business intent (or goals) into the network and returns actionable insights.

SD-Access provides automated end-to-end services (such as segmentation, quality of service, analytics, and so on) for user, device, and application traffic. SD-Access automates user policy, so organizations can ensure the appropriate access control and application experience is set for any user or device to any application across the network. This is accomplished with a single network fabric across LAN and WLAN which creates a consistent user experience, anywhere, without compromising on security.


SD-Access Benefits
• Automation: Consistent management of wired and wireless network provisioning and policy.
• Policy: Automated network segmentation and group-based policy.
• Assurance: Contextual insights for fast issue resolution and capacity planning.
• Integration: Open and programmable interfaces for integration with third-party solutions.


SD-Access Overview
Cisco's Software-Defined Access (or SD-Access) solution is a programmable network architecture that provides software-based policy and segmentation from the edge of the network to the applications. SD-Access is implemented via Cisco Digital Network Architecture Center (DNA Center), which provides design settings, policy definition, and automated provisioning of the network elements, as well as assurance analytics for an intelligent wired and wireless network.

In an enterprise architecture, the network may span multiple locations (or sites) such as a main campus, remote branches, and so on, each with multiple devices, services, and policies. The Cisco SD-Access solution offers an end-to-end architecture that ensures consistency in terms of connectivity, segmentation, and policy across different locations (sites). These can be described as two main layers:
• SD-Access Fabric: the physical and logical network forwarding infrastructure.
• DNA Center: the automation, policy, assurance, and integration infrastructure.

Let's discuss each of the major SD-Access solution components; additional details will be covered as we progress through the lab.


SD-Access Fabric
The complexity in today's network comes from the fact that policies are tied to network constructs such as IP addresses, VLANs, ACLs, etc. What if the enterprise network could be divided into two different layers, for different objectives? One layer dedicated to the physical devices and forwarding of traffic (known as an underlay), and another entirely virtual layer (known as an overlay), where wired and wireless users and devices are logically connected together, and services and policies are applied. This provides a clear separation of responsibilities and maximizes the capabilities of each sublayer.

This approach would dramatically simplify deployment and operations because a change of policy would only affect the overlay, and the underlay would not be touched. The combination of an underlay and an overlay is called a "network fabric."

The concepts of overlay and fabric are not new in the networking industry. Existing technologies such as MPLS, GRE, LISP, OTV, etc. are all examples of network tunneling technologies which implement an overlay. Another common example is Cisco Unified Wireless Network (CUWN), which uses CAPWAP to create an overlay network for wireless clients.

So, what is unique about the SD-Access Fabric? Let's start by defining the key SD-Access components.


SD-Access Network Underlay
The SD-Access network underlay (or simply: underlay) is comprised of the physical network devices, such as routers, switches, and wireless LAN controllers (WLCs), plus a traditional Layer 3 routing protocol. This provides a simple, scalable and resilient foundation for communication between the network devices. The network underlay is not used for client traffic (client traffic uses the fabric overlay).

All network elements of the underlay must establish IP connectivity between each other. This means an existing IP network can be leveraged as the network underlay. Although any topology and routing protocol could be used in the underlay, the implementation of a well-designed Layer 3 access topology is highly recommended to ensure consistent performance, scalability, and high availability. This eliminates the need for STP, VTP, HSRP, VRRP, etc. In addition, running a logical fabric topology on top of a prescriptive network underlay provides built-in functionality for multi-pathing, optimized convergence, and so on, and simplifies the deployment, troubleshooting, and management of the network.

DNA Center provides a prescriptive LAN Automation service to automatically discover, provision, and deploy network devices according to Cisco Validated Design best practices. Once discovered, the automated underlay provisioning leverages Plug and Play (PnP) to apply the required protocol and IP address configurations. DNA Center LAN Automation uses a best-practice IS-IS routed access design. The main reasons for IS-IS are:
• IS-IS is protocol agnostic, so it works with IPv4 and IPv6 addresses.
• IS-IS can work with only Loopback interfaces, and doesn't require an address on each L3 link.
• IS-IS supports an extensible TLV format for emerging use cases.
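For orientation only, a minimal IS-IS routed-access underlay fragment is sketched below. The hostname, interface, and addresses are hypothetical and do not correspond to lab devices; the lab underlay is already provisioned, and LAN Automation generates its own, more complete configuration.

! Hypothetical routed-access underlay sketch (not the lab configuration)
hostname UnderlayNode-Example
!
interface Loopback0
 ip address 192.0.2.1 255.255.255.255
!
interface TenGigabitEthernet1/1/1
 no switchport
 ip address 198.51.100.1 255.255.255.252
 ip router isis
!
router isis
 net 49.0001.1920.0200.2001.00
 is-type level-2-only
 metric-style wide
 passive-interface Loopback0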

SD-Access Network Overlay
The SD-Access fabric overlay (or simply: overlay) is the logical, virtualized topology built on top of the physical underlay. As described earlier, this requires several additional technologies to operate. The SD-Access fabric overlay has 3 main building blocks:
• Fabric data plane: the logical overlay is created by packet encapsulation using Virtual Extensible LAN (VXLAN), with Group Policy Option (GPO).
• Fabric control plane: the logical mapping and resolving of users and devices (associated with VXLAN tunnel endpoints) is performed by Locator/ID Separation Protocol (LISP).
• Fabric policy plane: where the business intent is translated into a network policy, using address-agnostic Scalable Group Tags (SGTs) and group-based policies.
VXLAN-GPO provides several advantages for SD-Access, such as support for both Layer 2 and Layer 3 virtual topologies (overlays), and the ability to operate over any IP-based network with built-in network segmentation (VRF/VN) and group-based policy (SGT).
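To make the control-plane building block more concrete, a greatly simplified LISP fragment for a hypothetical edge node is sketched below. The locator-set name, instance ID, map server/resolver address, and key are illustrative assumptions; the configuration DNA Center actually provisions (explored later, in Exercise 12) is more extensive, varies by role and software release, and the VXLAN data-plane settings are omitted here.

! Hypothetical, simplified LISP control-plane sketch (not the provisioned lab config)
router lisp
 locator-set RLOC_SET
  IPv4-interface Loopback0 priority 10 weight 10
 instance-id 4099
  service ipv4
   eid-table vrf CAMPUS
   database-mapping 172.16.101.0/24 locator-set RLOC_SET
   itr map-resolver 192.0.2.10
   etr map-server 192.0.2.10 key EXAMPLE-KEY
   itr
   etr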


SD-Access Policy
A fundamental benefit of SD-Access is the ability to instantiate logical network policy, based on services offered by the fabric. Some examples of services that the solution offers are the following:
• Security segmentation services
• Quality of Service (QoS)
• Capture/Copy services
• Application visibility services
These services are offered across the entire fabric independently of the device-specific address or location.

SD-Access Segmentation
Segmentation is a method or technology used to separate specific groups of users or devices from other groups for the purpose of security, overlapping IP subnets, etc. In the SD-Access fabric, VXLAN data-plane encapsulation provides network segmentation by using the VNI (Virtual Network Identifier) and Scalable Group Tag (SGT) fields in its header. The SD-Access fabric provides a simple way to implement hierarchical network segmentation: macro segmentation and micro segmentation.

Macro segmentation: logically separating a network topology into smaller virtual networks, using a unique network identifier and separate forwarding tables. This is instantiated as a virtual routing & forwarding (VRF) instance and referred to as a Virtual Network. A Virtual Network (VN) is a logical network instance within the SD-Access fabric, providing Layer 2 or Layer 3 services and defining a Layer 3 routing domain. The VXLAN VNI is used to provide both the Layer 2 and Layer 3 segmentation.

Micro segmentation: logically separating user or device groups within a VN, by enforcing source-to-destination access control permissions. This is commonly instantiated using access control lists (ACLs), also known as an access control policy. A Scalable Group is a logical object ID assigned to a "group" of users and/or devices in the SD-Access fabric and used as the source and destination classifier in Scalable Group ACLs (SGACLs). The SGT is used to provide address-agnostic group-based policies.
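A hedged sketch of how these two layers typically appear in device configuration follows: the VRF is what DNA Center creates for a VN (macro segmentation), while the SGACL contract would normally be authored in ISE/DNA Center and downloaded, not typed by hand (micro segmentation). All names and values are illustrative assumptions.

! Macro segmentation: a VN instantiated as a VRF (illustrative values)
vrf definition CAMPUS
 rd 1:4099
 address-family ipv4
 exit-address-family
!
! Micro segmentation: an SGACL contract between two scalable groups.
! The SGT/DGT binding for this contract is defined in the ISE TrustSec
! matrix and downloaded to the devices; it is not shown here.
ip access-list role-based DENY_TELNET
 deny tcp dst eq 23
 permit ip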


SD-Access Fabric Wireless
Administrators may notice that a traditional Cisco Unified Wireless Network (CUWN) provides some of the same advantages as SD-Access. For example, here are some of the common attributes:
• Tunneled overlay network (via CAPWAP encapsulation and separate control plane)
• Some levels of infrastructure automation (e.g., AP management, configuration management, etc.)
• Simple wireless user or device mobility (also known as client roaming)
• Centralized management controller (WLC)
But the CUWN approach comes with some tradeoffs:
• Only wireless users can benefit from the CAPWAP overlay; it does not apply to wired users.
• Wireless traffic must be tunneled to a centralized anchor point, which may not be optimal for many applications.
In addition, there are several advantages unique to wired users:
• Wired users can benefit from the performance and scalability that a distributed switching data plane provides.
• Wired users also benefit from advanced QoS and innovative services such as Encrypted Traffic Analytics (ETA), available in the switching infrastructure.
In other words, each domain (wired and wireless) has different advantages. So, what is unique about SD-Access wireless? The SD-Access fabric provides the best of the distributed wired and centralized wireless architectures by providing a common overlay and extending the benefits to both wired and wireless users. Finally, with the SD-Access fabric, customers can have a common policy and one unified experience for all their users, independently of the access media.


SD-Access Capabilities Supported in DNA Center 1.2.10
SD-Access (SDA) is Cisco's next-generation Enterprise Networking Access Solution, designed to offer integrated security, segmentation, and elastic service roll-outs via a fabric-based infrastructure, and an outstanding GUI experience for automated network provisioning via the new DNA Center (DNAC) application. SD-Access version 1.1 offers the following primary features (the feature list appears as a figure in the original guide).


DNA Center Overview
DNA Center is a centralized operations platform for end-to-end automation and assurance of enterprise LAN, WLAN, and WAN environments, as well as orchestration with external solutions and domains. It provides the IT operator with intuitive automation and assurance workflows that make it easy to design network settings and policies, and then provision and assure the network and policies, along with end-to-end visibility, proactive monitoring, and insights that provide a consistent, high-quality user experience.

Architecture Tenets
DNA Center has been designed to scale to the needs of the largest enterprise network deployments. It consists of both a network controller and a data analytics functional stack to provide the user with a unified platform for managing and automating their network. DNA Center has been built using a microservices architecture that is scalable and allows for continuous delivery and deployment. Some of the key highlights of DNA Center include:
• Horizontal scale by adding more DNA Center nodes to an existing cluster
• High availability – for both hardware components and software packages
• Backup and restore mechanism – to support disaster recovery scenarios
• Role-based access control mechanism, for differentiated access to users based on roles and scope
• Programmable interfaces to enable ISVs, ecosystem partners and developers to integrate with DNA Center

DNA Center is cloud-tethered to enable the seamless upgrade of existing functions and the addition of new packages and applications without having to manually download and install them.

Software-Defined Access vs DNA Center
DNA Center is the physical appliance running Applications and Tools. Software-Defined Access (SDA) is just one of the solutions provided by the DNA Center appliance. The difference is very subtle, but it is critical to understand, particularly when referring to version numbering. Another solution provided by DNA Center is Assurance. Additional solutions on the roadmap include Cisco SD-WAN (Viptela) and Wide Area Bonjour. While each of these solutions provided by DNA Center is tightly integrated, some can run in isolation. A DNA Center appliance can be deployed for Assurance (Analytics) and/or deployed for Software-Defined Access (Automation). This is accomplished by the installation / uninstallation of certain packages as listed in the release notes.

Note: If Automation and Assurance are deployed in this isolated manner, these two deployments would not coexist in the same network. They would be independent and separate networks with no knowledge of each other. Unlike DNA Center 1.0, where Automation and Assurance were a two-box solution, a DNA Center 1.2 deployment utilizing both solutions must have them both installed on the same appliance.


Software-Defined Access Versioning
The SDA solution version is a combination of the Cisco DNA Center controller version, the device platform (IOS, IOS XE, NX-OS, and AireOS) version, and the version of Cisco ISE. The SDA version and the DNA Center version are not the same thing. This means there are different device compatibility specifications with DNA Center (for Assurance) and Software-Defined Access (for Automation). When upgrading any SDA components, including the DNA Center controller version, device platform version, and ISE version, it is critically important to pay attention to the versions required to maintain compatibility between all the components. The SD-Access Product Compatibility matrix – located here – will indicate the compatible DNA Center, device platform, and ISE versions. At the time this lab guide was created, the SDA version was SDA 1.2 (update 1.2.10). This indicates that SDA is running on DNA Center version 1.2.10. There are now two separate DNA Center code trains – 1.1.x and 1.2.x – with 1.2.x bringing new features to the platform.

Introduction to SDA Distributed Campus
Multiple deployment options exist for the SD-Access fabric:
• Fabric Site: A single fabric contained within a single site.
• Fabric Multi-Site: A common fabric across multiple sites.

Fabric Site
A fabric "site" is a portion of the fabric which has its own set of Control Plane Nodes, Border Nodes, and Edge Nodes. Key characteristics of a single fabric site are:
• A given IP subnet is part of a single fabric site.
• L2 extension is only within a fabric.
• L2 / L3 mobility is only within a fabric.
• No context translation is necessary within a fabric.
A fabric site is, in principle, autonomous from other fabric sites from the connectivity perspective.

The diagram below depicts a fabric site.

Fabric Multi-Site
An SD-Access fabric may be composed of multiple sites. Each site may require different aspects of scale, resiliency, and survivability. The overall aggregation of sites (i.e. the fabric) must also be able to accommodate a very large number of endpoints and scale horizontally by aggregating sites, while keeping the local state contained within each site. Multiple fabric sites corresponding to a single fabric will be interconnected by a Transit Network area. The Transit Network area may be defined as a portion of the fabric that has its own control plane nodes and border nodes but does not have edge nodes. Furthermore, the Transit Network area shares at least one border node from each fabric site that it interconnects. The following diagram depicts a multi-site fabric.


Role of the Transit Network area: In general terms, a Transit Network area exists to connect to the external world. There are several approaches to external connectivity, such as:
• Traditional IP network
• Traditional Wide Area Network (WAN)
• Software-Defined WAN (SD-WAN)
• SD-Access (native)
In the fabric multi-site model, all external connectivity (including Internet access) is modeled as a Transit Network. This creates a general construct that allows connectivity to any other sites and/or services. The traffic across fabric sites, and to any other type of site, uses the control plane and data plane of the transit network to provide connectivity between these networks. A local border node is the handoff point from the fabric site, and the traffic is delivered across the transit network to other sites. The transit network may use additional SD-Access Fabric features. For example, if the transit network is a WAN, then features like performance routing may also be used.


To provide end-to-end policy and segmentation, the transit network should be capable of carrying the endpoint context information (VRF, SGT) across this network. Otherwise, a re-classification of the traffic will be needed at the destination site Border.
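One common approach when the handoff toward the transit is a plain IP (VRF-lite) link is to carry the SGT inline on that link, so that the group context does not have to be re-derived at the far end. A hypothetical interface sketch is shown below; it is illustrative only, the interface and SGT value are assumptions, and in an SDA deployment this type of handoff configuration is normally provisioned by DNA Center or planned by the administrator rather than copied from a guide.

interface TenGigabitEthernet1/0/1
 description Border handoff toward the transit network (hypothetical)
 cts manual
  policy static sgt 2 trusted
  propagate sgt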


Fabric Control Plane state distribution
The local control plane in a fabric site will only hold state relevant to endpoints that are connected to edge nodes within the local fabric site. The local endpoints will be registered to the local control plane by the local edge devices, as with a single fabric site. Any endpoint that isn't explicitly registered with the local control plane will be assumed to be reachable via the border nodes connected to the transit area. At no point should the local control plane for a fabric site hold state for endpoints attached in other fabric sites (i.e. the border nodes do not register information from the transit area). This allows the local control plane to be independent of other fabric sites, thus enhancing the overall scalability of the solution.

The control plane in the transit area will hold summary state for all fabric sites that it interconnects. This information will be registered to the Transit Area Control Plane by the border nodes from the different fabric sites. The border nodes register EID (Endpoint ID) information from their local fabric site into the Transit Network Control Plane for summary EIDs only, further improving overall scalability.

NOTE: It is important to note that endpoint roaming is only within a local fabric site, and not across sites.
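As a point of reference for the later transit exercises, the registrations held by a control plane node (LISP map server) can be inspected from its CLI with standard LISP show commands such as those below. These are generic IOS/IOS XE commands; their output is not reproduced here and will vary by device and release.

show lisp site summary
show lisp site
show lisp session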


About this Lab
This lab begins with the switching, routing, underlay, and Fabric SITE 1 already provisioned, and with DNA Center and ISE v2.3 installed and bootstrapped with IP addresses, NTP, and network connectivity. ISE has authentication and authorization policies in place. This guide will walk users through creating Fabric SITE 2 and adding both sites to a single Fabric Domain, managed by one centralized DNAC, while following the major use cases of the Automation solution in DNA Center 1.2.10.

Lab Exercises:
• Exercise 1: Introduction to DNA Center 1.2.10 - The lab begins with a quick overview of the DNA Center dashboard of applications and tools. This tour will help to bring familiarity with navigating the appliance.
• Exercise 2: Reviewing the pre-deployed Fabric SITE 1 - Walking through the configuration and policies configured on all the fabric nodes by DNAC, along with the integration of DNA Center with the Identity Services Engine (ISE).
• Exercise 3: Using the DNA Center Discovery Tool - The DNA Center Discovery tool will be used to discover and view the underlay devices.
• Exercise 4: Using the DNA Center Inventory Tool - The Inventory tool will be used to help lay out the topology maps for later use in the lab.
• Exercise 5: Using the DNA Center Design Application - Sites will be created using the Design Application. Here, common attributes, resources, and credentials are defined for re-use during various DNA Center workflows.
• Exercise 6: Using the DNA Center Policy Application - Virtual Networks are then created in the Policy Application, thus creating network-level segmentation. During this step, groups (SGTs) learned from ISE will be associated with the Virtual Networks, creating micro-level segmentation.
• Exercise 7: Using the DNA Center Provision Application - Discovered devices will be provisioned to the Site created in the Design Application.
• Exercise 8: Provisioning Devices to a Site - Discovered devices will be provisioned to the Site created in the Design Application.
• Exercise 9: Reserving IP Pools - IP pools will be reserved for Fabric SITE 1.
• Exercise 10: Creating the Fabric Overlay - The overlay fabric will be provisioned.
• Exercise 11: DNA Center Host Onboarding - The host onboarding provisioning will be completed.
• Exercise 12: Exploring DNA Center Provisioned Configurations - The Border Node's provisioned configuration will be rigorously explored with the CLI.
• Exercise 13: Fusion Routers and Configuring Fusion Internal Router - Fusion routers will be discussed in detail, and FusionInternal will be configured.
• Exercise 14: Exploring Transit Control Plane Configuration - The control plane node of the Transit Area, responsible for communication between Fabric SITE 1 and Fabric SITE 2, will be explored.
• Exercise 15: Testing the Inter-Fabric Sites connectivity - Connectivity between Fabric SITE 1 and Fabric SITE 2 will be tested.


Lab Topology (Simplified View)
The core of the network is a Catalyst 3850 (copper) switch called the LabAccessSwitch (numbered as #1 in the Physical Topology diagram below). It is the intermediate node between the other devices that will ultimately become part of the Software-Defined Access Fabric. It is directly connected to a Catalyst 9300 (#2) and another Catalyst 9300 (#3) switch. These will both act as edge nodes in the respective Fabric SITES 1 & 2.

The LabAccessSwitch (#1) is also directly connected to a pair of Catalyst 3850 (fiber) switches (#4 and #5) that will act as co-located control plane & border nodes: Fabric SITE 1 CPN-BN (#4) and Fabric SITE 2 CPN-BN (#5). The LabAccessSwitch (#1) is directly connected to the segment that provides access to DNA Center, ISE, and the Jump Host (not pictured) and is the default gateway for all of these management devices.

An ISR-4451, FusionInternal (#6), in Fabric SITE 2 acts as a fusion router for the shared services. This internal fusion router provides network access to the WLC-3504 and the DHCP/DNS server, which is a Windows Server 2012 R2 virtual machine. The transit between Fabric SITE 1 & Fabric SITE 2 is another ISR-4451, the TransitControlPlane (#7), directly connected to the LabAccessSwitch (#1); it is responsible for maintaining the database and making the control decisions for inter-fabric-site communication.

Two Windows 7 machines act as the host machines in the network. The Windows 7 machines are connected to GigabitEthernet 1/0/23 on EdgeNode1 (#2) and EdgeNode2 (#3).


Physical Topology


Addresses and Credentials
The table below provides the access information for the devices within a given pod. Usernames and passwords are case sensitive.

Note: IP connectivity in the underlay network has been preconfigured using IS-IS routing. The devices will be referred to as routers regardless of whether they are Layer-3 switches or truly Integrated Services Routers (ISRs). When referring to the SDA role of a router, such as border/control node, lower-case letters will be used with spacing between the words. When referring to a specific device, such as CP-BN_FS1/CP-BN_FS2, the hostname of the device, as it appears on the CLI, will be used. The control plane nodes may also be referred to as Map Servers and/or Map Resolvers.

The terms EID-prefix and EID-space may be used interchangeably. They refer to the prefix space used by the end-hosts in the lab, which includes 172.16.101.0/24, 172.16.201.0/24, and 172.16.151.0/24. End-hosts refers to the Windows virtual machines directly connected to EdgeNode1 and EdgeNode2. They may also be referred to as endpoints.

DNA Center uses slightly different terminology to refer to existing technologies. VRFs are referred to as Virtual Networks (VNs). TrustSec Security Group Tags (SGTs) are referred to as Scalable Group Tags. The DNA Center terminology and common technology names may be used interchangeably.
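If desired, the pre-built IS-IS underlay can be spot-checked from any lab device CLI with standard IOS/IOS XE show commands such as the following (outputs are omitted here and will vary by device):

show isis neighbors
show clns neighbors
show ip route isis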

In the context of this lab guide, the generic term fabric refers to an overlay network. The specific term Fabric – or The Fabric – refers to the SDA-Overlay (LISP overlay) that is created during the lab exercises. When referring to LISP encapsulation or LISP data plane encapsulation in the context of SDA, this is referring to the VXLAN GPO encapsulation. For brevity and clarity, this may be referred to as VXLAN encapsulation or LISP (VXLAN) in the guide. However, this is not meant to indicate that the encapsulation method is the same VXLAN as RFC 7348.

Exercise 1: Introduction to DNA Center 1.2.10
Step 1. Open the browser to DNA Center using the management IP address https://192.168.100.10 and log in with the following credentials.

Username: admin Password: DNACisco!

NOTE: DNA Center's login screen is dynamic. It may have a different background. When first accessing the login page, the browser may appear to freeze for up to approximately twenty seconds. This is directly related to our lab environment and will not happen in production. It does not impact any configuration or actual performance of DNA Center. DNA Center's SSL certificate may not be automatically accepted by your browser. If this occurs, use the advanced settings to allow the connection.

Step 2. Once logged in, the DNA Center dashboard is displayed.

Please note that this lab has Fabric Site 1 pre-deployed; hence the DNAC dashboard shows the Overall Health Summary, Network Sites, and discovered Network Devices on the main page.

Step 3. To view the DNA Center version, click on the wheel at the top right and then select About. Notice the DNA Center Controller version 1.2.10


Step 4. Press Show Packages to view the various packages that make up DNA Center 1.2.10. In addition, we can navigate to view the Release Notes.

Step 5. The DNA Center main screen is divided into two main areas, Applications and Tools. Applications are the top half and Tools are the bottom half.


The topmost area is the output from the Assurance Application, along with telemetry displaying the overall network health. These areas contain the primary components for creating and managing the solutions provided by the DNA Center appliance. You can also navigate to the DNAC tools by clicking on the square (Apps) icon at the top right.

Step 6. The System Settings pages control how the DNA Center system is integrated with other platforms, show information on users and applications, and provide the ability to perform system backup and restore. To view the System Settings, click on the gear icon at the top right, and then select System Settings.

Step 7. On the System 360 tab, DNA Center displays the number of currently running primary services. There should* be one hundred and ten (110) running services in the lab environment.

*Note: The one hundred and ten (110) services number is current as of this lab environment's DNA Center 1.2.10. The number may vary in production depending on installed packages. If a number other than 110 is displayed, please notify your instructor.

Step 8. To view the running services, click the button.

Deployment Note: DNA Center can take over 90-120 minutes to be completely initialized after powering on. This large collection of services takes time to fully be instantiated. Note: Individual service log information can be accessed by hovering over a service and clicking the Grafana or Kibana logos. These features are beyond the scope of this guide.

Click the X to close the Services pane. Remain on the System Settings page.

DNA Center Package Management
Because DNA Center is powered by a collection of applications, tools, and processes, these individual components can be updated without needing to redeploy DNA Center from scratch. Updates of components can provide new functionality, as well as new applications and tools.

Step 9. Navigate to the Software Updates tab to download/upgrade the DNAC software version and install the updated packages.


Note: A shortcut to view the Software Updates is to click the cloud icon at the top right. You can also accomplish this by running the 'maglev package status' command on the DNAC 1.2.10 console (a saved session in the SecureCRT terminal application on the Jump Host desktop), logging in first if needed (username: admin, password: CiscoDNA!).

Deployment Note: Recall that the solution, Software Defined Access, is dependent on the DNA Platform version, the Individual package versions, and the IOS / IOS XE software versions. Research should be performed before arbitrarily updating packages in DNA Center. Note: The screen shot above was taken during an early upgrade cycle. The DNA Center in the lab may or may not show updates available, as DNA Center updates are being released approximately every two weeks.

If updates are available, please DO NOT attempt to update the DNA Center appliance in the lab. This lab guide is currently based on DNA Center 1.2.10.

This completes Exercise 1


Exercise 2: Reviewing the pre-deployed Fabric SITE 1
The primary agenda of this lab is to implement Software-Defined Access for a Distributed Campus (Multi-Site); hence, to keep it precise and focused, one fabric site named San Jose_FabricSite1 has been pre-deployed. This exercise walks through the setup of Fabric SITE 1 so that the deployment of Fabric SITE 2 can later be followed from scratch.

Step 10. Return to DNA Center in the browser. Click the Tools button at the top right of the home page and select Discovery.

Step 11. There should be an already existing discovery named Fabric Site 1 Devices.


Step 12. Fabric Site 1 consists of Edge Node 1 and a co-located Control Border node (CP-BN_FS1).

Next, we will validate the hierarchical design of Fabric SITE 1, constructed under the DNAC Design application.

Sites and Buildings – Network Hierarchy Tab
Step 13. Click the button to return to the DNA Center dashboard. Click the Design button in the top panel.

This navigates to Design > Network Hierarchy. Verify that the network hierarchy Global / San Jose_FabricSite1 / Building 11 / Floor 1 has already been created:
• Area: San Jose_FabricSite1
• Building: Building 11
• Building Address: 350 East Tasman Drive, San Jose, California 95134
• Floor: Floor 1

Validate Shared Common Servers – Network Settings Tab (Part 1)
DNA Center allows saving common network resources and settings with the Design Application's Network Settings sub-application (tab). As described earlier, this allows information pertaining to the enterprise to be stored so it can be reused throughout DNA Center workflows. The idea is to define once and use many times.

By default, when clicking the Network Settings tab, newly configured settings are assigned as Global network settings. They are applied to the entire hierarchy and inherited by each site, building, and floor. It is possible to define specific network settings and resources for specific sites. The site-specific feature will be used during LAN Automation. For this lab deployment, the entire network will share the same network settings and resources.

Deployment Note: For a Software-Defined Access workflow, AAA, DHCP, and DNS servers are required to be configured. For an Assurance workflow, SYSLOG, SNMP, and NetFlow servers must be configured. These servers for Assurance need to point to the DNA Center's IP address (192.168.100.10 in the lab).


Step 13. From the Design Application, click the Network Settings tab.

Step 14. All shared services servers should be available. Confirm this by verifying that the AAA Server is shown on the page.

DNA Center 1.1 and later provide an option to configure separate AAA servers for network users and for clients/endpoints. In DNA Center 1.0, only the RADIUS protocol was supported for network users. In DNA Center 1.1, both TACACS and RADIUS protocols are supported for network users. These changes from DNA Center 1.1 onward are referred to as Multiple AAA.

Deployment Note: TACACS is not supported for Client/Endpoint authentication in DNA Center.

Step 15. Validate the configuration of the remaining servers using the information in the table below:

Field                                   Value
DHCP Server                             198.18.133.30
DNS Server – Domain Name                dna.local
DNS Server – IP Address                 198.18.133.30
SYSLOG Server                           192.168.100.10
SNMP Server                             192.168.100.10
Netflow Collector Server IP Address     192.168.100.10
Netflow Collector Server Port           2055
NTP Server                              192.168.100.6
Time Zone                               EST5EDT


Device Credentials – Network Settings Tab (Part 2)
While configuring the earlier Discovery job for Fabric SITE 1, some credentials were added with the Save as Global setting. Applying that setting populates those specific credentials in the Device Credentials tab of the Network Settings of the Design Application. Global Device Credentials should be the credentials that most of the devices in a deployment use. The Save as Global setting also populates these credentials automatically in the Discovery tool. In this way, they do not need to be re-entered when configuring a future Discovery job.

Step 16. Verify the credential population in the Discovery tool tab.

DNA Center has the ability to store multiple sets of Global Device Credentials. However, a specific set must be defined for use in LAN Automation. The minimum required credentials are CLI, SNMPV2C Read, and SNMPV2C Write. Note: There can be a maximum of five (5) global credentials defined for any category.

Step 17. Navigate to Design > Network Settings > Device Credentials.

Step 18. In the Device Credentials tab, click the button underneath CLI Credentials for the username Operator. The button changes to the selected state.

Step 19. Click the Read button underneath SNMP Credentials. Ensure the SNMPV2C Read micro-tab is selected (this is indicated by the words being in grey). The button changes to the selected state.

Step 20. Click the SNMPV2C Write micro-tab. It will change color from blue to grey. Click the Write button underneath SNMP Credentials. The button changes to the selected state.

Step 21. Click Save.


IP Address Pools – Network Settings Tab (Part 3)
DNA Center supports both manually entered IP address allotments and integration with IPAM solutions, such as Infoblox and BlueCat.

Deployment Note: DHCP IP address pools required in the deployment must be manually defined and configured on the DHCP server. DNA Center does not provision the actual DHCP server, even if it is a Cisco device. It is simply setting aside pools as a visual reference. These address pools will be associated with VNs (Virtual Networks/VRFs) during the Host Onboarding section.

Note: The IP address pools cannot have sub-pools and cannot have any assigned IP addresses from the IP address pool.

About Global and Site-Specific Network Settings
Consider a large continental-United States network deployment with sites in New York and Los Angeles. Each site would likely use its own DHCP, DNS, and AAA (ISE Policy Service Nodes – PSNs). For deployments such as these, it is possible to configure site-specific Network Settings for Network, Device Credentials, IP Pools, and more. By default, when navigating to the Network Settings tab, the Global site is selected. This can be seen by the green vertical indicator. These green lines in DNA Center indicate the current navigation location within the Design Application, to help the user understand which item for which site is being configured.

Figure: Network Hierarchy Position Indicators – DNA Center Design Application

These navigation indicators will be important later, when specific IP address pools must be reserved for a specific site. This is because pool reservations are not available at the Global hierarchy level and must be done at the Site, Building, or Floor level.

Step 22. Navigate to Design > Network Settings > IP Address Pools.

Step 23. Several IP Address Pools will be created for various uses. Some will be used for Device Onboarding (end-host IP addresses) while others will be used for Guest Access and infrastructure.

In the IP Address Pool tab, you should see two IP pools, Production (172.16.101.0/24) and WiredGuest (172.16.250.0/24), already created at the Global hierarchical level.

Step 24. Click the Apps button or the home icon to return to the DNA Center dashboard.

About ISE and DNAC Integration
Once ISE is registered with Cisco DNA Center, any device Cisco DNA Center discovers, along with relevant configuration and other data, is pushed to ISE. Users can use Cisco DNA Center to discover devices and then apply both Cisco DNA Center and ISE functions to them, as these devices will be exposed in both applications. Cisco DNA Center and ISE devices are all uniquely identified by their device names.

Cisco DNA Center devices, as soon as they are provisioned and belong to a particular site in the Cisco DNA Center site hierarchy, are pushed to ISE. Any updates to a Cisco DNA Center device (such as a change to the IP address, SNMP or CLI credentials, ISE shared secret, and so on) will flow to the corresponding device instance on ISE automatically. When a Cisco DNA Center device is deleted, it is removed from ISE as well. Please note that Cisco DNA Center devices are pushed to ISE only when these devices are associated with a particular site where ISE is configured as its AAA server.

During the integration of ISE and DNA Center, all Scalable Group Tags (SGTs) present in ISE are pulled into DNA Center. Whatever policy is configured in the (TrustSec) egress matrices of ISE when DNA Center and ISE are integrated is also pulled into DNA Center. This is referred to as Day 0 Brownfield Support: if policies are present in ISE at the point of integration, those policies are pulled into DNA Center and populated. Except for the SGTs, anything TrustSec and TrustSec Policy related that is created directly on ISE OOB (out-of-band) from DNA Center after the initial integration will not be available or displayed in DNA Center. There is a cross-launch capability in DNA Center to see what is present in ISE with respect to TrustSec Policy.

DNAC and ISE software compatibility
For a successful integration and full-fledged functionality, it is very important to consider the software compatibility between SDA, DNAC, and ISE.

https://www.cisco.com/c/en/us/solutions/enterprise-networks/software-definedaccess/compatibility-matrix.html

Note: The information above is current as of DNA Center 1.2.10. Additional capabilities in the future may extend the integration of ISE and DNA Center.

Deployment Note and Additional Caveat: If something is created OOB in ISE after the initial integration with DNA Center, then CoA (Change of Authorization) pushes need to be done manually. Generally, in the ISE GUI, changes to the TrustSec Matrix trigger a CoA push down to all devices. CoA needs to be done manually if a TrustSec policy is created OOB (created in the ISE GUI) after the initial integration with DNA Center.


Step 25. On the System 360 tab, DNAC allows you to add the AAA server, i.e., Cisco Identity Services Engine (ISE). Cisco DNA Center provides a mechanism to create a trusted communications link with Cisco ISE and permits Cisco DNA Center to share data with ISE in a secure manner. In this lab, we have pre-configured the ISE server with the required credentials and fields.

DNA Center will begin integrating with ISE using pxGrid. This includes the process of mutual certificate authentication between DNA Center and ISE.

Step 26. Open a new browser tab, and log into ISE using the IP address https://192.168.100.20 and the following credentials:
▪ User: admin
▪ Password: ISEisC00L


Step 27. Once logged in, go to Administration > pxGrid Services.


Step 28. Validate that the pxGrid connection is online. The online connection information will appear at the bottom of the page.

Step 29. Locate the dnac1.2.10 subscriber (client) and ensure its status is Online. Other subscribers (clients) are shown in the list in the center of the page along with their status.

Step 30. Click the Apps button or the home icon to return to the DNA Center dashboard.

This completes Exercise 2


Exercise 3: Using the DNA Center Discovery Tool
In DNA Center, the Discovery tool is used to find existing devices using CDP (Cisco Discovery Protocol), LLDP (Link Layer Discovery Protocol), or IP address ranges. IP Range will be used in the lab. Before DNA Center can perform automation and configuration on a device, it must be discovered and added to inventory. Using the Discovery tool will accomplish both of these tasks.

Step 31. Return to DNA Center in the browser. Click on the Discovery tool from the home page.

Step 32. This opens the New Discovery page. Enter the Discovery Name as Fabric Site 2 Devices.

Step 33. Select the Range button, which now changes to the selected state.

Note: Outside of the lab environment, this IP address could be any Layer-3 interface or Loopback Interface on any switch that DNA Center has IP reachability to. In this lab, DNA Center is directly connected to the LabAccessSwitch on Gig 1/0/12. That interface has an IP address of 192.168.100.6. The LabAccessSwitch is also DNA Center’s default gateway to the actual Internet. It represents the best starting point to discover the lab topology.

Step 34. Since the lab is relatively small, we shall use the loopback IP range of all the devices: 192.168.255.6 – 192.168.255.10.

Note: DNA Center uses the CDP table information (show cdp neighbors) of the defined device (192.168.100.6 /LabAccessSwitch) to find CDP neighbors. It will continue to find CDP neighbors to the depth provided by CDP level. This is done by querying the CISCO-CDP-MIB via SNMP.


Step 35. Change the Preferred Management IP to Use Loopback.

Note: This will instruct DNA Center to use the Loopback IP address of the discovered equipment for management access. DNA Center will use Telnet/SSH to access the discovered equipment through their Loopback IP address. Later, DNA Center will configure the Loopback as the source interface for RADIUS/TACACS+ packets.
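For reference, the resulting device-side commands typically look like the following minimal sketch. DNA Center provisions the exact equivalent automatically, so nothing needs to be entered by hand in the lab.

! Use Loopback0 as the source interface for AAA traffic (illustrative only)
ip radius source-interface Loopback0
ip tacacs source-interface Loopback0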

Step 36. In the Credentials section, you will see that the credentials have already been added using the table below. Please note the credentials were added with Save as Global; therefore they are already populated. At a minimum, CLI and SNMPv2 Read/Write credentials must be defined. The routers and switches will be discovered using the CLI and SNMPv2 credentials.


CLI Credentials
    Field               Value              Save as Global
    Username            Operator           Yes
    Password            CiscoDNA!12345
    Enable Password     cisco

SNMP v2c Read Credentials
    Field               Value              Save as Global
    Name/Description    RO                 Yes
    Read Community      RO

SNMP v2c Write Credentials
    Field               Value              Save as Global
    Name/Description    RW                 Yes
    Write Community     RW

Credentials that had the Save as Global option selected appear in purple. Credentials that do not use this option are considered Job Specific; they are only used for this particular Discovery job and appear in green. Ensure the credentials are populated similarly to the screenshot below.

Note: The Save as Global option saves the credentials to the Design > Network Settings > Device Credentials screen. These can be used later as part of LAN Automation. The Save button must be clicked after entering each credential set. This populates the Credentials section in the center of the screen.
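For context, the discovery credentials above must already exist on the devices before Discovery can succeed. A minimal, hypothetical sketch of the matching device-side IOS configuration is shown below using the lab values; the actual lab devices may differ (for example in privilege levels or secret types).

! Hypothetical device-side configuration matching the discovery credentials.
username Operator secret CiscoDNA!12345
enable secret cisco
snmp-server community RO ro
snmp-server community RW rw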


Step 37.

The final step is to enable Telnet as a discovery protocol for any devices not configured to support SSH. Scroll down the page and click to open the Advanced section. Click the protocol Telnet and ensure it has a blue check mark next to it. A warning message should appear; click OK to close it. Rearrange the order so that Telnet comes before SSH.

Note: The protocol order determines the order that DNA Center will use to try to access the device using the VTY lines. If the first protocol method is unsuccessful, DNA Center will attempt the next protocol. Once a method is successful, DNA Center will use that method for future device access and configuration. Telnet is used only because of the nature of this lab environment. SSH should be used in production.
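Whether Telnet or SSH can succeed ultimately depends on the VTY configuration of each device. The fragment below is a minimal, assumed sketch of lab-style VTY lines that accept both protocols; production devices would normally allow SSH only.

! Assumed lab-style VTY configuration (illustrative).
line vty 0 15
 login local
 transport input ssh telnet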

Step 38.

Click the Start button in the lower right-hand corner to begin the discovery job.


Step 39.

As the Discovery progresses, the page will present the discovered devices on the right-hand side. The Discovery status will change to Complete once the job finishes.

Note: While devices may show as Discovered and the Discovery job may appear Complete, the entire process itself is not complete until devices have fully populated in the Inventory tool.

Step 40.

Verify that the Discovery process was able to find four (4) devices.

Step 41.

Verify that the devices with the following IP addresses have been discovered:

▪ 192.168.255.6 - LabAccessSwitch.dna.local
▪ 192.168.255.7 - FusionInternal.dna.local
▪ 192.168.255.8 - CP-BN_FS2.dna.local
▪ 192.168.255.9 - TransitControlPlane.dna.local

Step 42.

Ensure that Status, ICMP, SNMP, and CLI all show a success indicator for all four (4) discovered devices.

Step 43.

For the WLC and Edge Node 2, we will run separate Discovery tasks.

Step 44.

Let's start with EdgeNode2. Enter the Discovery Name as Fabric Site 2_EdgeNode 2 and the Discovery Type as Range (the steps are the same as in the previous discovery job).

Step 45.

Manually input the IP From and To address as 192.168.255.2, which is the IP of EdgeNode2.


Step 46.

Verify that the device with the following IP address has been discovered:

▪ 192.168.255.2 - EdgeNode2.dna.local


Step 47.

Ensure that Status, ICMP, SNMP, and CLI all show a success indicator for the discovered device.

Step 48.

For the WLC, we will run another Discovery task.

Step 49.

Enter the Discovery Name as WLC and the Discovery Type as Range.

Step 50.

Manually input the From and To address as 192.168.51.240, which is the IP of the WLC.

Step 51.

Expand the Credentials section, and then click on Add Credentials. The Add Credentials pane will slide in from the right side of the page.

Use the table below to populate the applicable credentials. The Save button must be clicked after entering each credential set.

CLI Credentials
    Field               Value        Save as Global
    Username            Operator     Checked
    Password            CiscoDNA!
    Enable Password     cisco

SNMP v2c Read Credentials
    Field               Value        Save as Global
    Name/Description    RO           Yes
    Read Community      RO

SNMP v2c Write Credentials
    Field               Value        Save as Global
    Name/Description    RW           Yes
    Write Community     RW

Note: You may use the credentials for SNMP (Read & Write) that were saved earlier from the previous discovery as they are the same.

Step 52.

Disable the CLI Credentials from the previous discovery to give precedence to the WLC credentials.

Step 53.

The final step is to enable Telnet as a discovery protocol for any devices not configured to support SSH. Scroll down the page and click to open the Advanced section. Click the protocol Telnet and ensure it has a blue check mark next to it. A warning message should appear; click OK to close it. Rearrange the order so that Telnet comes before SSH.


Step 54.

Verify that the device with the following IP address has been discovered:

▪ 192.168.51.240 – WLC_3504.dna.local

Step 55.

Click the Apps button to return to the DNA Center dashboard.

This completes Exercise 3


Exercise 4: Using the DNA Center Inventory Tool The DNA Center Inventory tool is used to add, update, and/or delete devices that are managed by DNA Center. The Inventory tool can also be used to perform additional functions and actions such as updating device credentials, resyncing devices, and exporting the devices’ configuration. It also provides access to the Command Runner tool.

Step 56.

From the DNA Center Home Page, click the Inventory tool. A bookmark is also available in the browser.

Step 57.

All the discovered devices should show as Reachable and Managed. Their up time, last update time, and the default resync interval should also be displayed.

Note: While the DNA Center Discovery App may show the discovery process has completed, the full process of discovery and adding to inventory is not complete until the Reachability Status and Last Inventory Collection Status are listed as shown above, Reachable and Managed, respectively. The full process from beginning Discovery to being added to Inventory may take up to ten minutes based on the size of the lab topology. Expect longer times in production, particularly when the number of CDP hops and the number of devices are larger.

Note: Please make sure all the devices shown above are found in your pod. If they are not, please notify the instructor.

Step 58.

Additional information is available in the Inventory tool by adding columns. Use the Layout settings drop-down to add the Device Role, Config, and IOS/Firmware columns.

Click Apply.

Note: The Config column allows a view of the running configuration of the device. The IOS/Firmware column is useful for quickly viewing the software version running on the devices.

Step 59.

Device Role controls where DNA Center displays a device in the topology view in both the Provision Application and the Topology tool. It does not modify or add any configuration with regard to the device role selected. Use the chart below to confirm/set each device to the role shown.


Device                 Device Role
EdgeNode1              Access
EdgeNode2              Access
CP-BN_FS1              Core
CP-BN_FS2              Core
FusionInternal         Border Router
LabAccessSwitch        Distribution
TransitControlPlane    Core
WLC_3504               Access

Step 60.

Click the Apps button to return to the DNA Center dashboard.

This completes Exercise 4


Exercise 5: Using the DNA Center Design Application DNA Center provides a robust Design Application to allow customers of every size and scale to easily define their physical sites and common resources. Using a hierarchical format that is intuitive to use, the Design Application removes the need to redefine the same resource such as DHCP, DNS, and AAA Servers in multiple places when provisioning devices.

Creating Sites and Buildings – Network Hierarchy Tab Step 61.

From the DNA Center home page, click Design to enter the Design Application.

Step 62.

This navigates to Design > Network Hierarchy. Verify that a Fabric SITE 1 network hierarchy has been created already.


Step 63.

A new site, Fabric SITE 2, will be added. Begin by clicking Add Site and adding a new area named Milpitas_FabricSite2.

Step 64.

Enter the area name Milpitas_FabricSite2 and click Add.

Step 65.

A new Building will be added to Milpitas_FabricSite2. Click Milpitas_FabricSite2, then click the gear icon and select Add Building.


Step 66.

Enter Building name as “Building 22” and address as “821 Alder Dr, Milpitas, CA 95035, USA” and click Add.

Step 67.

A new floor will be added to the Building 22 building in the Milpitas_FabricSite2 site. Begin by expanding Milpitas_FabricSite2 with the expand button.

Step 68.

Click the gear icon and then Add Floor. The Add Floor dialog box will appear.


Step 69.

Enter the floor name as Floor 2, then upload the floor plan by navigating to Desktop > DNAC 1.2.x > floorplans and picking any .png file (for illustration). Click Add.


Note: DNA Center supports common graphical file formats for floor plans, including .jpg, .gif, .png, .dxf, and .dwg. A .png file is used in the lab simply based on the availability of a floor map image.

Configuring Network Settings Step 70.

In Exercise 2, we discussed the Network Settings, which include the Device Credentials and the IP Pool definitions. Now, the IP Address Pools for Fabric SITE 2 will be defined as follows. Navigate to Design > Network Settings > IP Address Pools.

Step 71.

Click Add IP Pool at the top right corner. The Add IP Pool dialog box appears.

Step 72.

Configure the IP Address Pools as shown in the table below. Because the DHCP and DNS servers have already been defined, they are available from the drop-down boxes and do not need to manually be defined. This demonstrates the define once and use many concept that was described earlier. The Overlapping checkbox should remain unchecked for all IP Address Pools.

Step 73.

Click the Save button between each IP Address Pool to save the settings


Step 74.

Verify that a Success notification appears when saving IP Address Pools.

Step 75.

Once completed, the IP Address Pools tab should appear as below:

Step 76.

Click the Apps button to return to the DNA Center dashboard.

This completes Exercise 5


Exercise 6: Using the DNA Center Policy Application The Policy Application supports creating and managing Virtual Networks, supports Policy Administration and Contracts, and supports Scalable Groups using the Registry tab. Most deployments will want to set up their SD-Access Policy (Virtual Networks and Contracts) before doing any SD-Access Provisioning. The general order of operation (for SDA) is Design, Policy, and Provision, corresponding with the order of the applications seen on the DNA Center dashboard. In this section, the segmentation for the overlay network (which has not yet been fully created) will be defined in the DNA Center Policy Application. This process virtualizes the overlay network into multiple self-contained Virtual Networks (VRFs).

Deployment Note: By default, any network device (or user) within a Virtual Network is permitted to communicate with other devices (or users) in the same Virtual Network. To enable communication between different Virtual Networks, traffic must leave the Fabric (Default) Border and then return, typically traversing a firewall or fusion router. This process is done through route leaking and multi-protocol BGP (MP-BGP), as sketched below. This will be covered in later exercises.
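To make the note concrete, the sketch below shows the general shape of route leaking on a fusion router using MP-BGP route-targets. The VRF name matches the CAMPUS VN created later in this exercise, but the route-distinguisher, route-target values, and BGP AS number are illustrative assumptions, not the configuration provisioned in this lab.

! Hypothetical fusion-router sketch: import/export CAMPUS routes via route-targets.
vrf definition CAMPUS
 rd 65444:4099                        ! assumed RD
 address-family ipv4
  route-target export 65444:4099
  route-target import 65444:4099
  route-target import 65444:100       ! assumed RT carrying shared-services routes
 exit-address-family
!
router bgp 65444
 address-family ipv4 vrf CAMPUS
  redistribute connected
 exit-address-family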

Step 77.

From the DNA Center home page, click Policy to enter the Policy Application.

Step 78.

This navigates to Policy > Dashboard. Verify that two (2) Virtual Networks (VNs) and eighteen (18) Scalable Groups (SGTs) are present.


Note: The eighteen (18) SGTs were the SGTs present on ISE during the DNA Center and ISE integration. They were imported as described in the About Day 0 Brownfield Support section. DNA Center has two default Virtual Networks: DEFAULT_VN and INFRA_VN. The DEFAULT_VN is present to encourage NetOps personnel to use segmentation, as this is why SDA was designed and built. At present, it should be ignored – specific VNs will be created. Deployment Note: Future releases may remove the DEFAULT_VN. The INFRA_VN is for Access Points and Extended Nodes only. It is not meant for end users or clients. The INFRA_VN will actually be mapped to the Global Routing Table (GRT) in LISP and not a VRF instance of LISP. Despite being present in the GRT, it is still considered part of the overlay network.

Creating VNs and Binding SGTs to VNs Within the Software-Defined Access solution, two technologies are used to segment the network: VRFs and Scalable Group Tags (SGTs). VRFs (VNs) are used to segment the network overlay itself. SGTs are used to segment inside of particular VRFs. Encapsulation in SDA embeds the VRF and SGT information into the packet in order to enforce policy end-to-end across the network. The routing control plane of the overlay (the LISP process) makes forwarding decisions based on the VRF information. The policy plane of the overlay makes forwarding decisions based on the SGT information. Both pieces of information must be present for a packet to traverse the Fabric. This exercise will focus on creating Virtual Networks and associating SGTs with them, as this is the minimum requirement for packet forwarding in an SDA Fabric.

Note: The DNA Center 1.0 Lab Guide focused on creating detailed policy that was then pushed to ISE. This was used to show the interaction between these two platforms. Policy is a critical component of the SDA solution, although policy creation is not the focus of this Lab Guide. Please refer to the DNA Center 1.0 SRE Lab Guide and the DNA Center User Guide for information on Group-Based Access Control, Registry, Policies, and Contracts.

Step 79.

Click the Virtual Network button or the Virtual Network tab.

Step 80.

Begin by creating a new Virtual Network by clicking the add (+) button.

Step 81.

Enter the Virtual Network Name of CAMPUS.

Note: Please pay attention to capitalization. The name of the virtual network defined in DNA Center will later be pushed down to the Fabric devices as a VRF definition. VRF definitions on the CLI are case sensitive: VRF Campus and VRF CAMPUS would be considered two different VRFs, as shown in the sketch below.
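For illustration, the VRF that the CAMPUS VN maps to on an IOS-XE fabric device has the following basic shape (a simplified sketch; the route-distinguisher and route-targets that DNA Center provisions are omitted). A definition using "Campus" would create a second, different VRF.

! Simplified sketch of the VRF corresponding to the CAMPUS VN.
vrf definition CAMPUS
 address-family ipv4
 exit-address-family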

Step 82.

Multiple Scalable Groups can be selected by clicking on them individually or by clicking and dragging over them. Move all Available Scalable Groups except BYOD, Guest, Quarantined_Systems, and Unknown to the right-hand column Groups in the Virtual Network.

Step 83.

Click Save.


Step 84.

Verify the CAMPUS VN has been created and contains fourteen (14) SGTs.

Step 85.

Click the add button to create a second VRF. Enter the Virtual Network Name of GUEST.

Step 86.

Select and move the BYOD, Guest, Quarantined_Systems, and Unknown SGTs from Available Scalable Groups to the right-hand column Groups in the Virtual Network.

Step 87.

Click Save.


Note: The Guest Virtual Network box should remain unchecked. This is a Fabric Enabled Wireless (FEW) option. When this option is selected, a dedicated Guest border node and Guest control plane node can be provisioned. These items will be covered in future lab guides on Fabric Wireless.

Step 88.

Verify the GUEST VN has been created and contains four (4) SGTs.

Step 89.

Click the Apps button to return to the DNA Center dashboard.

Note: In the Host Onboarding section, the VNs that were just created will be associated with the created IP Address Pools. This process is how a particular subnet becomes associated with a particular VRF.

This completes Exercise 6

Exercise 7: Using the DNA Center Provision Application Now that devices are discovered, added to Inventory, Network Design elements have been completed, and the VNs are created, the (LISP) overlay can be provisioned, and Host Onboarding can begin.

About Provisioning Provisioning consists of two completely separate actions:

1. Assigning devices to a Site, Building, or Floor
2. Provisioning devices that have been assigned to a Site, Building, or Floor

When completing the first step, assigning a device to a site, DNA Center will push certain site-level network settings configured in the Design Application to the devices, whether or not they will be used as part of the Fabric overlay. Specifically, DNA Center pushes the Netflow Exporter, SNMP Server and Traps, and Syslog network server information configured in the Design Application for a site to the devices assigned to the site. Note: When the devices are provisioned to the site, the remaining network settings from the Design Application will be pushed down to these devices. They include time zone, NTP server, and AAA configuration.
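As a rough illustration of what these site-level settings translate to on a device, the fragment below shows representative IOS-XE commands for syslog, SNMP traps, NTP, and time zone. The server address and time zone are placeholders (the lab's shared-services address is reused here only for illustration); the exact template DNA Center renders differs by platform and release.

! Representative (not exact) site-settings configuration.
logging host 198.18.133.30                      ! assumed syslog collector
snmp-server enable traps
snmp-server host 198.18.133.30 version 2c RO    ! assumed trap receiver
ntp server 198.18.133.30                        ! pushed during the Provision-to-Site step
clock timezone PST -8                           ! placeholder time zone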

The second step, provisioning the device (that has been assigned to a site), is a prerequisite before that device can be added to the Fabric and perform a Fabric role.

Step 90.

From the DNA Center home page, click Provision to enter the Provision Application.


Step 91.

The Provision Application will open to the Device Inventory Page. Verify that the current inventory shows seven (7) devices.

Assigning Devices to a Site – Provisioning Step 1 The first step of the provisioning process begins by selecting devices and associating (assigning) them to a site, building, or floor previously created with the Design Application. Before devices can be provisioned, they must be discovered and added to Inventory. This is why the Discovery tool and Design exercises were completed first; there is a distinct order of operation in DNA Center workflows. In this lab, all devices in Inventory will be assigned to a site (Step 1). After that, only some devices will be provisioned to a site (Step 2). Among that second group, only certain devices that were provisioned to a site will become part of the Fabric and operate in a Fabric Role. This is the level of granularity that DNA Center provides in Orchestration and Automation. In the lab, all devices provisioned to the site will receive further provisioning to operate in a Fabric Role. Use the table below for reference on how devices will be added to a site, provisioned to a site, and used in a Fabric Role.


Device                 Assign to Site    Provisioned to Site    Assign as Fabric Role
EdgeNode1              Yes               Yes                    Yes
EdgeNode2              Yes               Yes                    Yes
CP-BN_FS1              Yes               Yes                    Yes
CP-BN_FS2              Yes               Yes                    Yes
FusionInternal         Yes               -                      -
LabAccessSwitch        Yes               -                      -
TransitControlPlane    Yes               Yes                    -

Deployment Note: There are two levels of compatibility in DNA Center and two different support matrices. The first level is being compatible with DNA Center – or, ostensibly, compatible for Assurance. The second level is being compatible with Software-Defined Access. This is why the distinction between SDA and DNA Center versioning and platform support was called out specifically at the beginning of the lab. To provision a device that has been assigned to a site (Step 2), it must be a device supported for SDA. Note: The two Fusion routers are ISR4451s and are not supported in any SDA role. The difference is subtle but critical to understand, as a DNA Center deployment is not required to use Software-Defined Access. Just because a device can be automated by DNA Center does not necessarily mean the device is being automated by the SDA process in DNA Center. Device Automation and Software-Defined Access are technically separate packages on the Appliance. Refer to the compatibility matrix to verify SD-Access supported hardware and software with respect to the fabric roles.

Step 92.

Devices will be assigned to Fabric SITE 1. From the Provision > Device Inventory page, click the checkbox next to each of the LabAccessSwitch, EdgeNode1, and CP-BN_FS1 devices. The boxes change to checked and the selected devices are highlighted.

Step 93.

Click Actions

Step 94.

Click Assign Device to Site. DNA Center navigates to the Assign Devices to Site page.

Step 95.

Click the Choose a site button; a flyout will open. Select the site Global/SanJose_FabricSite1/Building 11/Floor 1.

Step 96.

Click Save


Step 97.

Click Apply to All

Step 98.

Click Apply on the right-hand side.

Step 99.

Verify that a Success notification appears indicating the selected devices were added to the site.

Step 100.

Now devices will be assigned to Fabric SITE 2. From the Provision > Device Inventory page, click the checkbox next to each of the EdgeNode2, CP-BN_FS2, FusionInternal, and TransitControlPlane devices. The boxes change to checked and the selected devices are highlighted.

Step 101.

Click Actions

Step 102.

Click Assign Device to Site. DNA Center navigates to the Assign Devices to Site page.

Step 103.

Click the Choose a site button; a flyout will open. Select the site Global/Milpitas_FabricSite2/Building 22/Floor 2.

Step 104.

Click Save.

Step 105.

Click Apply to All

Step 106.

Click Apply on the right-hand side.

Step 107.

Verify that a Success notification appears indicating the selected devices were added to the site.


Step 108.

Click the Apps button to return to the DNA Center dashboard.

This completes Exercise 7


Exercise 8: Provision Devices to a Site Recall from earlier exercises that provisioning has two different steps that are completely separate actions: 1. Assign Devices to Site, Building, or Floor 2. Provision Device assigned to Site, Building, or Floor The first step has been completed. All devices in the topology have been added to the SanJose_FabricSite1/Building 11/Floor 1 and Milpitas_FabricSite2/Building 22/Floor 2. When the devices are provisioned to the site, the remaining network settings from the Design Application will be pushed down to these devices. They include time zone, NTP server, and AAA configuration. Deployment Note: Devices that are provisioned to the site will now authenticate and authorize their console and VTY lines against the configured AAA server. Note and Reminder: During Step 1, DNA Center pushes the Netflow Exporter, SNMP Server and Traps, and Syslog network server information configured in the Design Application for a site to the devices assigned to the site.
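For orientation, the AAA configuration referred to above has roughly the following shape on IOS-XE. The ISE address is the lab's ISE node, while the server name, group name, and shared secret are placeholders; the actual template DNA Center pushes is release-specific.

! Illustrative AAA sketch of settings pushed when a device is provisioned to a site.
aaa new-model
radius server ISE-PSN
 address ipv4 192.168.100.20 auth-port 1812 acct-port 1813
 key SharedSecret123                 ! placeholder shared secret
aaa group server radius dnac-radius-group
 server name ISE-PSN
aaa authentication login default group dnac-radius-group local
aaa authorization exec default group dnac-radius-group local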

Devices of dissimilar types can be assigned (added) (Step 1 of Provisioning) to a Site simultaneously. However, only devices of the same type can be simultaneously Provisioned to a Site (Step 2 of Provisioning). This will be clearer in a few steps.

The following devices will be provisioned to the Site in the next steps.


Note: For brevity, the devices are referred to by their hostname, not their FQDN as displayed in the DNA Center Inventory.

Step 109.

From the Provision > Device Inventory page, click the checkboxes next to CP-BN_FS1 and CP-BN_FS2. Their boxes change to checked and those devices are highlighted.

Step 110.

Click Actions.

Step 111.

Click Provision.

Step 112.

DNA Center opens to the Provision Devices page at the Assign Site step. Verify the assigned sites for the devices:

▪ CP-BN_FS1: Global/San Jose_FabricSite1/Building 11/Floor 1
▪ CP-BN_FS2: Global/Milpitas_FabricSite2/Building 22/Floor 2

Click Next.

Step 113.

DNA Center moves to Step Configuration. This section is not currently in use and is noted as being not applicable. Click Next.


Step 114.

DNA Center moves to Step Advanced Configuration. This section would list available configuration templates had any been configured. Template Editing is outside the scope of this lab guide. Click Next.

Step 115.

DNA Center moves to step Summary. This page lists a summary of the selected devices, their details, and which network settings will be provisioned to the devices. Click Deploy.


Step 116.

The DNA Scheduler appears. The Scheduler allows configuration to be staged in advance and then provisioned during a Change Window. The scheduler can also be run on-demand. Ensure Now is selected, and then click Apply.

Step 117.

DNA Center will validate the configuration. Verify that Success notifications appear. A notification will display for each device as provisioning starts and completes.

Step 118.

DNA Center will return to Provision > Device Inventory. Confirm that CP-BN_FS1 and CP-BN_FS2 have been successfully provisioned.


Step 119.

From the Provision > Device Inventory page, click the checkboxes next to EdgeNode1 and EdgeNode2. Their boxes change to checked and those devices are highlighted.

Step 120.

Click Actions.

Step 121.

Click Provision.

Step 122.

DNA Center opens to the Provision Devices page at the Assign Site step. Verify the assigned sites for the devices:

▪ EdgeNode1: Global/San Jose_FabricSite1/Building 11/Floor 1
▪ EdgeNode2: Global/Milpitas_FabricSite2/Building 22/Floor 2

Click Next.


Step 123.

DNA Center moves to Step Configuration. This section is not currently in use and is noted as being not applicable. Click Next.

Step 124.

DNA Center moves to Step Advanced Configuration. This section would list available configuration templates had any been configured. Template Editing is outside the scope of this lab guide. Click Next.


Step 125.

DNA Center moves to step Summary. This page lists a summary of the selected devices, their details, and which network settings will be provisioned to the devices. Click Deploy.

Step 126.

The DNA Scheduler appears.

The Scheduler allows configuration to be staged in advance and then provisioned during a Change Window. The scheduler can also be run on-demand. Ensure Now is selected, and then click Apply.

Step 127.

DNA Center will validate the configuration. Verify that Success notifications appear. A notification will display for each device as provisioning starts and completes.

Step 128.

DNA Center will return to Provision > Device Inventory. Confirm that EdgeNode1 and EdgeNode2 have been successfully provisioned.

Step 129.

Now let's repeat the same steps to provision the TransitControlPlane. Confirm that CP-BN_FS1, CP-BN_FS2, EdgeNode1, EdgeNode2, and TransitControlPlane have all been successfully provisioned.


This completes Exercise 8


Exercise 9: Reserve IP Address Pool As part of earlier exercises in the lab, the Global IP Pools have been created and marked accordingly. The next step in the sequence, before creating the Fabric Overlay, is to reserve a LAN Pool using the Global IP Pools defined and created in those earlier steps. Reserving an IP pool tells DNA Center to set aside that block of addresses for special use. The two most common uses are Underlay Automation and Border BGP configuration (Layer-3 handoff). An IP Address Pool can be reserved at the Site, Building, or Floor level in the Hierarchy; it cannot be reserved at the Global level. Once reserved, the Pool (or portions of it) can be used for these special purposes. All devices in the network are currently assigned to either Floor 1 (Fabric SITE 1) or Floor 2 (Fabric SITE 2). Address Pool reservation could be done higher in the hierarchy, although it will be reserved at the same level that the Seed Device (LabAccessSwitch) is currently assigned to – the Floor level.

Step 130.

From the DNA Center home page, enter the Design Application by clicking the DESIGN button at the top panel.

Step 131.

Select the Network Settings tab.

Step 132.

Select the IP Address Pools sub-tab.

Step 133.

In the site hierarchy, expand San Jose_FabricSite1 using the expand button. Click San Jose_FabricSite1 and expand down to Floor 1.

About Network Settings Inheritance By default, when working in the Network Settings tab, the settings defined and configured are Global settings – unless they are specifically created at a lower level. They are applied to the entire hierarchy and inherited by each site, building, and floor. It is possible to define specific network settings and resources for specific sites, as discussed earlier in the guide.

Sites, buildings, and floors automatically inherit settings defined higher in the hierarchy. These inherited settings are indicated with an inheritance icon. When working in Network Settings, it is critical to know the current location in the hierarchy and understand the current inheritance.

Step 134.

This inheritance warning will appear every time a lower site, building, or floor is selected in the hierarchy. To reduce these pop-ups, click Don’t show again and press OK.

Step 135.

Note the new column Inherited From and the new Reserve IP Pool button. Neither are present when in the Global section of the hierarchy.

Step 136.

Click the Reserve IP Pool button. The Reserve IP Pool dialog box appears.


Step 137.

Add the details using the below table:

IP Pool Name          IP Subnet       Global IP Pool    Gateway IP      DHCP Server      DNS Server
Production_Floor1     172.16.101.0    Production        172.16.101.1    198.18.133.30    198.18.133.30
WiredGuest_Floor1     172.16.250.0    WiredGuest        172.16.250.1    198.18.133.30    198.18.133.30


Step 138.

The final screen should look as follows:

Step 139.

Now expand Milpitas_FabricSite2 using the expand button. Click Milpitas_FabricSite2 and expand down to Floor 2.


Step 140.

Click the Reserve IP Pool button. The Reserve IP Pool dialog box appears. Add the details using the below table:

IP Pool Name            IP Subnet       Global IP Pool    Gateway IP      DHCP Server      DNS Server
Staff_Floor2            172.16.200.0    Staff             172.16.200.1    198.18.133.30    198.18.133.30
FusionInternal_Floor2   192.168.30.0    FusionInternal    192.168.30.1    198.18.133.30    198.18.133.30

Step 141.

The final screen should look as follows:


This completes Exercise 9


Exercise 10: Creating the Fabric Overlay The Fabric Overlay is the central component that defines SDA. In documentation, devices that are supported for SDA are devices that are capable of operating in one of the Fabric Overlay Roles – operating as a Fabric Node. From a functionality standpoint, this means the device has the ability to run LISP and to encapsulate LISP data packets in the VXLAN GPO format. When assigning devices to a fabric role (Border, Control Plane, or Edge), DNA Center will provision a VRF-based LISP configuration on the device. Deployment Note: On IOS-XE devices this LISP configuration will utilize the syntax that was first introduced in IOS XE 16.5.1 and enforced in 16.6.1.

Creating the Fabric Overlay is a multi-step workflow. Devices must be discovered, added to Inventory, assigned to a Site, and provisioned to a Site before they can be added to the Fabric. Each of the Fabric Overlay steps is managed under the Fabric tab of the Provision Application.

1. Identify and create Transits
2. Create the Fabric Domain (or use the Default)
3. Assign Fabric Role(s)
4. Set up Host Onboarding

Identify and create Transits With version 1.2.X, the concept of SD-Access Multisite was introduced. There is also an obvious requirement to connect the SD-Access Fabric with the rest of the company. As a result, the new workflow asks you to create a “Transit,” which connects the fabric beyond its own domain. As mentioned, there are two types of Transits:

1. SDA Transit: to connect two or more SDA fabric sites with each other (requires an end-to-end MTU of 9100)
2. IP Transit: to connect the SDA Fabric Domain to a traditional network for a Layer-3 hand-off

In this lab, you will configure an IP Transit to connect CP-BN_FS2 with the FusionInternal router, and an SDA Transit to connect SanJose_Fabric SITE 1 with Milpitas_Fabric SITE 2.
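To give a sense of what the IP Transit (Layer-3 handoff) ultimately produces, the sketch below shows the general shape of the eBGP peering that DNA Center automates between a border node and the fusion router. The VLAN number and /30 addressing are assumptions (DNA Center allocates these automatically from the reserved handoff pool); only the AS numbers 65004 (border) and 65444 (fusion) come from this lab.

! Illustrative Layer-3 handoff on the border node toward FusionInternal (per-VN).
interface Vlan3001
 description Handoff to FusionInternal for the CAMPUS VN (assumed VLAN/subnet)
 vrf forwarding CAMPUS
 ip address 192.168.30.1 255.255.255.252
!
router bgp 65004
 address-family ipv4 vrf CAMPUS
  neighbor 192.168.30.2 remote-as 65444
  neighbor 192.168.30.2 activate
 exit-address-family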

Step 142.

From the Fabric tab in the Provisioning Application, click on “Add Fabric or Transit” and then click on “Add Transit”.


Step 143.

An IP Transit to the FusionInternal router will be created.

▪ Give the Transit a name: “FusionInternal”
▪ Choose the Transit Type: “IP-Based”
▪ Choose the Routing Protocol: BGP
▪ The Remote AS Number will be 65444

Click Add.


Step 144.

DNA Center will create the IP Transit. Verify a Success notification appears indicating the Transit was created.

Concept of SDA- Transit and Transit Control Plane SD-Access transits are used to create larger fabric domains sharing a common policy and provisioning by interconnecting a fabric site with one or more additional sites. Each fabric site has an independent set of control plane nodes, edge nodes, border nodes, wireless LAN controllers, and ISE nodes. By virtue of each site having all components required to make it independently survivable, the SD Access transit is useful for building a distributed campus — multiple independent fabrics representing different buildings at a location or as part of a metropolitan network. The key consideration for the distributed campus design using SD-Access transit is that the network between fabric sites and to Cisco DNA Center should be created with campus-like connectivity. Note: The connections should be high-bandwidth (Ethernet full port speed with no sub-rate services), low latency, and should accommodate the MTU setting used for SD-Access in the campus network (typically 9100 bytes).

You create an SD-Access transit by associating it with two things: a network that has connectivity to the fabric sites that are to be included as part of the larger fabric domain, and a control plane node dedicated to the transit functionality which is termed as Transit Control Plane.
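Because the SD-Access transit carries VXLAN-encapsulated traffic end to end, the underlay between sites must support jumbo frames. As a quick illustration, the global MTU on a Catalyst switching platform is raised as follows; routed platforms use a per-interface mtu command instead, and the lab underlay is assumed to be pre-configured.

! Illustrative jumbo-MTU setting for the inter-site underlay (Catalyst switch syntax).
system mtu 9100
! Verify with:
show system mtu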

Step 146.

In order to create the SDA Transit, navigate from the Fabric tab in the Provisioning Application, click on “Add Fabric or Transit” and then click on “Add Transit”.


Step 147.

Create the SDA Transit using the information below:

▪ Give the Transit a name: “TCP_SDA”
▪ Choose the Transit Type: “SD-Access”
▪ Choose the Site for the Transit Control Plane: “Global/Milpitas_FabricSite2/Building 22/Floor 2”
▪ Transit Control Plane: “TransitControlPlane.dna.local”

Click Add.


Step 148.

DNA Center will create the SDA Transit. Verify a Success notification appears indicating the Transit was created.

Creating the Fabric Domain (Fabric SITE 2)

A fabric domain is a logical construct in DNA Center. A fabric domain is defined by a set of devices that share the same control plane node(s) and border node(s). In a domain, end-host facing devices are added as edge node(s).

Fabric Sites and Fabric Domains

A fabric site is an independent fabric area with a unique set of network devices: control plane, border node, edge node, wireless controller, ISE PSN. Different levels of redundancy and scale can be designed per site by including local resources: DHCP, AAA, DNS, Internet, and so on. A fabric site can cover a single physical location, multiple locations, or only a subset of a location:

▪ Single location: branch, campus, or metro campus
▪ Multiple locations: metro campus + multiple branches
▪ Subset of a location: building or area within a campus

A fabric domain can consist of one or more fabric sites plus a transit site. Multiple fabric sites are connected to each other using a transit site.

Multi-Site Fabric Domain

A multi-site fabric domain is a collection of fabric sites interconnected via a transit site and WLCs. The transit site consists of control plane nodes that help interconnect multiple fabric sites. A Software-Defined Access (SDA) fabric comprises multiple sites. Each site has the benefits of scale, resiliency, survivability, and mobility. The overall aggregation of sites (that is, the fabric domain) must also be able to accommodate a very large number of endpoints and scale modularly or horizontally by aggregating sites, with scale contained within each site.

Step 149.

From the Fabric tab in the Provision Application, click on “Add Fabric or Transit” and then click on “Add Fabric”.

Step 150.

An Add Fabric flyout window will appear. At this step DNAC will create the Fabric Domain.

▪ Enter the Fabric Name: Multisite_FabricDomain
▪ Select the location of Fabric SITE 1 (Global/San Jose_FabricSite1/Building 11/Floor 1); to create the Fabric Domain, at least one fabric site must be included in it.

Click Add.

Note: You cannot use any special characters (such as the - symbol) as part of the Fabric Domain name.

Step 151.

Verify a Success notification appears indicating the Fabric Domain was created.

Step 152.

Click on the Multisite_FabricDomain.

Step 153.

Click the add button to add Fabric SITE 2 as the second fabric-enabled site in the Multisite_FabricDomain.

Step 154.

Add the location for Fabric SITE 2: Global/Milpitas_FabricSite2/Building 22/Floor 2. Click Add.


Step 155.

Verify a Success notification appears indicating the fabric site was added.

Step 156.

Now, once you have created the Fabric Domain, added the fabric sites, and created the transits, click on the fabric sites to view the Fabric Infrastructure. Navigate to Provision > Fabric > Multisite_FabricDomain > SanJose_FabricSite1 > Floor 1.

About Software-Defined Access Validation Feature in DNA Center When DNA Center is used to automate SDA, how can the automated configuration be validated? This was the key question and motivation behind the Validation feature. There are two types of validation, Pre-verification and (Post-) Verification. Pre-verification answers the question “Is my network ready to deploy Fabric?” by running several pre-checks before a device is added to a Fabric. The (post-) verification validates a device’s state after a Fabric change has been made. Lab Guide Critical Note: All Pre- and Post-Verification steps listed in the lab guide are optional due to time constraints. The Pre- and Post-Verification steps are not separated under their own section headings to indicate that they are optional, so please read through them, but skip over them if you are running short on time.


About Pre-Verification Pre-verification checks are run on devices that have been assigned a role, although have not been added to the Fabric yet. (This means that the Save button has not yet been clicked). Currently eight pre-verification checks are supported.

Note: The connectivity check depends on the device role selected and what devices are already present in the Fabric. During the connectivity check, DNA Center logs into the device and initiates a ping to the Loopback interface of another device. It does not specify a source interface (such as the local Loopback). If a device is selected as an edge node, DNA Center will perform a ping from that device to the control plane node and the border nodes. If the device is selected as a border node, DNA Center will perform a ping from that device to the control plane node(s) only.

About Verification (Post-Verification) This is performed after a device is added to the Fabric. There are two places where verification is supported – the initial topology map under Provision > Fabric > Select Devices page and the Provision > Fabric > Host Onboarding page. Verification checks whether the SDA provisioned configuration is present on the device.

Note: The Post-Verification check does not check for any configuration that may have been added manually to the device using the CLI. It is only checking for parameters configured by DNA Center during the provisioning workflows.


Adding Devices to the Fabric To create a Fabric, at minimum, an edge node and a control plane (node) must be defined. For communication outside of that Fabric, a border (node) must also be defined. Icon and text color for each device name in the Fabric Topology map are very important. Grey devices with grey text are not part of the Fabric or not currently part of the Fabric; they are either simply not assigned a Fabric Role or are an Intermediate Node – a Layer-3 device between two Fabric Nodes. Devices of any color with blue text are currently selected. Devices with a blue outline have been assigned a Fabric Role, although the Save button has not been pressed yet; this blue outline indicates intention. Devices that are blue have been added to a Fabric role and have had that Fabric configuration pushed down to the device.

Decoding the Fabric Topology Map

1. Device is not part of the Fabric.
2. Device is not selected.
3. Device has been assigned a Fabric role, although the Save button has not been pressed.
4. Device has been assigned a Fabric role and configuration has been pushed down to the device because the Save button has been pressed.
5. Device has been added in the border/control plane role in the Fabric.
6. Device has been added in the edge role in the Fabric.

When a device is clicked in the Fabric Topology Map, a dialog box appears.

Fabric Provisioning Options

Step 157.

Start adding devices to Fabric SITE 1. Navigate to Provision > Fabric > Multisite_FabricDomain > SanJose_FabricSite1 > Building 11 > Floor 1.

Step 158.

From the topology map, click on the device CP-BN_FS1.dna.local. The text turns blue, indicating it is selected.


Step 159.

Now, from the popup, select “Add as CP+Border” to add it as a co-located control plane and border node. The grey icon now has a blue outline, indicating the intention to add this device to the Fabric.

This opens a new flyout window on the right with additional settings for the fabric border role.


Step 160.

We shall add the transit that we created earlier. Add the details as follows:

Option                       Value
Border to                    Anywhere (Internal & External)
Local Autonomous Number      65333
Transits                     TCP_SDA


Step 161.

A warning message will appear indicating the difference between all three types of Border(s). Click OK.

Step 162.

Observe that the device has a blue outline, but the change has not yet been saved. You need to click Save at the end.


Step 163.

This opens a new flyout window on the right. You need to click Apply at the end.

Step 164.

The page will show that Fabric device provisioning has been initiated; after the requisite configurations have been pushed, it will indicate that the device has been updated to the Fabric Domain successfully.

Step 164.

Next, we will add another device as an Edge Node. From the topology map, click on the device EdgeNode1.dna.local. The text turns blue, indicating it is selected.

Step 165.

Now, from the popup, select “Add to Fabric” to add it as an Edge Node for Fabric SITE 1. The grey icon now has a blue outline, indicating the intention to add this device to the Fabric.

Step 166.

Observe that the device has a blue outline, but the change has not yet been saved. You need to click Save at the end.


Step 167.

This will open the DNAC event scheduler; check Now and click Apply.

Step 168.

The page will show that Fabric device provisioning has been initiated; after the requisite configurations have been pushed, it will indicate that the device has been updated to the Fabric Domain successfully.

This completes the provisioning for Fabric SITE 1. Now let's start provisioning Fabric SITE 2.


Step 169.

Navigate to Provision > Fabric > Multisite_FabricDomain > Milpitas_FabricSite2 > Building 22 > Floor 2.

Step 170.

From the topology map, click on the device CP-BN_FS2.dna.local. The text turns blue, indicating it is selected.

Step 171.

Now, from the popup, select “Add as CP+Border” to add it as a co-located control plane and border node. The grey icon now has a blue outline, indicating the intention to add this device to the Fabric.

This opens a new flyout window on the right with additional settings for the fabric border role.


Step 172.

At this step, refer to the lab physical topology: the FusionInternal router is directly connected to CP-BN_FS2 and will act as the IP Transit between the constructed Fabric Domain and the Shared Services (DNS and DHCP servers) plus the WLC. We will also add the SDA Transit to enable communication between Fabric SITE 1 and Fabric SITE 2. Add the details as follows:

Option                       Value
Border to                    Anywhere (Internal and External)
Local Autonomous Number      65004
Select IP Pool               FusionInternal_Floor2
Transits                     FusionInternal, TCP_SDA


Step 173.

Click the dropdown for the FusionInternal device and click on Add Interface.

Step 174.

Select the External Interface to be TenGigabitEthernet1/0/2 and click on all the Virtual Networks.


Step 175.

Use the scroll bar on the right to expose the bottom section.

Ensure that the Interface is TenGigabitEthernet1/0/2 and the number of VNs is 4.

Step 176.

Click Add.


Step 177.

A warning message will appear indicating the difference between all three types of Border(s). Click OK.

Step 178.

On the Provision > Fabric > Select Devices page, the letters CP|B should now be present on the Control/Border Node’s outline. The grey icon now has a blue outline, indicating the intention to add this device to the Fabric. Click Save.

Step 179.

This opens a new flyout window on the right. You need to select Now and click Apply at the end.

Step 180.

The page will show that Fabric device provisioning has been initiated; after the requisite configurations have been pushed, it will indicate that the device has been updated to the Fabric Domain successfully.


Step 181.

The device will appear as follows once it has been successfully added with its Fabric role.

Step 182.

Next, click on the device EdgeNode2.dna.local and add it to the Fabric using the Add to Fabric option from the menu.


Step 183.

On the Provision > Fabric > Select Devices page, the letter E should now be present on the EdgeNode2’s outline. The grey icon now has a blue outline, indicating the intention to add this device to the Fabric. Click Save.

Step 184.

This opens the DNAC scheduler flyout window. You need to select Now and click Apply at the end.

Wait for the devices to be updated with their Edge Node Fabric roles; they should finally show up as solid blue.


Step 189.

When we created the SDA Transit in the earlier exercise, we defined the transit control plane as TransitControlPlane.dna.local. You will see the letter “T” on the outline of the device designated as the Transit between Fabric SITE 1 and Fabric SITE 2.

You have now successfully added all the required devices to the Fabric SITE 2.

This completes Exercise 10


Exercise 11: DNA Center Host Onboarding Host onboarding allows the attachment of end-points to the fabric nodes. The host onboarding workflow allows you to authenticate an end-point, classify it to a scalable group tag, and associate it with a virtual network and IP Pool. The key steps to achieve this are as follows:

1. Authentication Template selection: DNA Center provides several predefined Authentication Templates to streamline the process of applying authentication mechanisms to your network. Selecting a template will automatically push the required configurations to the fabric edge.
2. Virtual Network and subnet selection for unicast and multicast: associate IP address pools to the Virtual Networks (VN).
3. Fabric SSID selection: for integrating wireless with the SD-Access fabric.
4. Static port settings: this allows settings at the port level.

Host Onboarding Part 1 Host Onboarding is two distinct steps – both located under the Provision > Fabric > Host Onboarding tab for a particular fabric domain. The first step is to select the Authentication template. These templates are predefined and pushed down to all devices that are operating as edge nodes. This step must be completed first. There are currently four pre-built authentication templates in DNA Center:

1. Easy Connect
2. Closed Mode
3. Open Authentication
4. No Authentication

Each option will cause DNA Center to push down a separate configuration set to the edge nodes. Closed Mode is the most restrictive and provides the best device security posture of the four options. It will require connected end-host devices to authenticate to the network using 802.1x. If 802.1x fails, MAC Authentication Bypass (MAB) will be attempted. If MAB fails, the device will not be permitted any network access.

Figure: Closed Mode Port Behavior
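For reference, the closed-mode behavior described above corresponds to port-level 802.1X/MAB configuration on the edge-node access ports. The fragment below is a simplified classic-IBNS sketch of that behavior, not the exact policy-map based (IBNS 2.0) template that DNA Center 1.2.x pushes; the interface name is arbitrary.

! Simplified sketch of closed-mode port behavior (classic IBNS syntax).
interface GigabitEthernet1/0/10
 switchport mode access
 authentication order dot1x mab
 authentication priority dot1x mab
 authentication port-control auto
 mab
 dot1x pae authenticator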

Step 190.

Navigate to Provision > Fabric > Multisite_FabricDomain > SanJose_FabricSite1 > Floor 1 > Host Onboarding for Fabric SITE 1 and select Closed Authentication.


Step 191.

Click the Save button. The Modify Authentication Template dialogue box appears.

Step 192.

DNA Center will save the configuration template, although the configuration is not yet pushed to the devices. Verify a Success notification appears indicating the Authentication template was saved.

Note: In DNA Center 1.0, when an authentication template was selected and the Save button was pressed, the configuration was immediately pushed down to the edge nodes. However, this meant that DNA Center was applying configuration multiple times, as the authentication template configuration was pushed down again each time an IP Address Pool and VN were bound together. This was an inefficient process that was corrected in DNA Center 1.1.

Step 193.

Next, navigate to Provision > Fabric > Multisite_FabricDomain > Milpitas_FabricSite2 > Floor 2 > Host Onboarding for Fabric SITE 2 and select Closed Authentication.

Click the Save button.


Step 194.

The Modify Authentication Template dialogue box appears.

Step 195.

DNA Center will save the configuration template, although the configuration is not yet pushed to the devices. Verify a Success notification appears indicating the Authentication template was saved.

Host Onboarding Part 2 (for CAMPUS VN) The second step (of Host Onboarding) is to bind the IP Address Pools to the Virtual Networks (VNs). At that point, these bound components are referred to as Host Pools. Multiple IP address pools can be associated with the same VN. However, an IP Address Pool should not be associated with multiple VNs; doing so would allow communication between the VNs and break the first line of segmentation in SDA. The second step (of Host Onboarding) has a multi-step workflow that must be completed for each VN:

1. Select the Virtual Network
2. Select the desired Pool(s)
3. Select the traffic type
4. Enable Layer-2 Extension (optional)


Step 196.

From Provision > Fabric > Multisite_FabricDomain > SanJose_FabricSite1 > Floor 1 > Host Onboarding for Fabric SITE 1, select CAMPUS under Virtual Networks. The Edit Virtual Network dialog box appears for the CAMPUS VN.

Note: Notice that the available Virtual Networks are the VNs created during the Policy Application exercises. DNA Center configuration operates on a specific workflow that has a distinct order of operation: Design, Policy, Provision, and Assurance. This is also the order in which the applications are listed on DNA Center’s home page.

Step 197.

The IP Address Pools created during the Design Application exercises are displayed. Select the checkbox next to Production_Floor1.


The box changes to checked.

Step 198.

From the Choose Traffic dialog box, select Data for the Production Address Pool.

Step 199.

By default, the Layer-2 Extension (Layer-2 Overlay) is enabled when an Address Pool is associated with a VN. Leave this in the ON position.

Step 200.

Press Update. The Modify Authentication Template dialogue box appears.

Note: The Layer-2 extension is not absolutely required for this Wired-Only Lab Guide, although the current recommendation is to leave it On due to some changes in how ARP is forwarded in the Fabric. This extension is used primarily with Fabric Wireless, although it might also be used in an environment where applications communicate using only Layer-2 (without IP). Leaving this extension On will not necessarily impact things in the lab – except for some specifics on ARP – as the end hosts will have their packets forwarded by the Layer-3 process, not the Layer-2 process. It will also allow the ability to see the full SDA LISP configuration for both Layer-3 and Layer-2.

Step 201.

Select Now and click Apply.


Step 202.

DNA Center will begin pushing configuration down to the devices. Verify a Success notification appears indicating the Segment was associated with the Virtual Network.

Step 203.

Verify the CAMPUS VN is now highlighted in blue. This indicates IP addresses have been bound with this VRF.

Step 204.

Next, navigate to Provision > Fabric > Multisite_FabricDomain > Milpitas_Fabric SITE1 > Floor2 > Host Onboarding for Fabric SITE 2 and select CAMPUS under Virtual Networks. The Edit Virtual Network dialog box appears for the CAMPUS VN.


Step 205.

The IP Address Pools created during the Design Application exercises are displayed. Select the box next to StaffFloor2. The box changes color to show it has been selected.

Step 206.

From the Choose Traffic dialog box, select Data for the Staff Address Pool.

Step 207.

By default, the Layer-2 Extension (Layer-2 Overlay) is enabled when an Address Pool is associated with a VN. Leave this in the ON position.


Step 208.

Press Update. The Modify Authentication Template dialog box appears.

Step 209.

Select Now and click Apply.

Step 210.

DNA Center will begin pushing configuration down to the devices. Verify that a Success notification appears indicating the Segment was associated with the Virtual Network.

Step 211.

Verify the CAMPUS VN is now highlighted in blue. This indicates IP addresses have been bound with this VRF.

Note: DNA Center creates SVIs on the edge nodes beginning at interface VLAN 1021 and moving upward for the number of VNs. It also creates the associated (Layer-2) VLANs 1021 and up. VLAN 3999 is provisioned as the critical VLAN and VLAN 4000 as the voice VLAN. VLANs 1002-1005 were originally intended for bridging with FDDI and Token Ring networks. For backward compatibility reasons in IOS and VTP, these VLAN numbers remain reserved and cannot be used or deleted; they will always appear on the devices.
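If you want to peek at what was just provisioned, the following commands can be run on an edge node console. This is a quick sketch rather than an official lab step, and the exact VLAN/SVI numbers on your pod may differ from 1021 (output omitted):

EdgeNode2# show vlan brief
EdgeNode2# show ip interface brief | include Vlan
EdgeNode2# show running-config interface Vlan1021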

This completes Exercise 11

Exercise 12: Exploring Provisioned Configuration

Step 212.

Return to the desktop of the Jump Host. Open the terminal application SecureCRT to view the device consoles.

Step 213.

Double-click a device name to open the console for that device. Log in with the credentials:
Username: cisco
Password: cisco
Enable password: cisco

Every time DNA Center pushes configuration to a device, or simply runs a sync with it, a console message should indicate that DNA Center logged into the VTY lines of the device using the Operator user, showing:

▪ DNA Center's IP Address
▪ The method of access
▪ The username used for access


Exploring LISP Configurations – CLI

Step 214.

Display the LISP configuration on the co-located Control and Border nodes "CP-BN_FS1" and "CP-BN_FS2".

CP-BN_FS1# show running-config | section lisp
router lisp
 locator-table default
 locator-set rloc_609d2e7e-0b01-4456-a10c-18b15aadba0e
  IPv4-interface Loopback0 priority 10 weight 10
  auto-discover-rlocs
  exit-locator-set
 !
 prefix-list Building_11/Floor_1_Multisite_FabricDomain_list1
  172.16.101.0/24
  exit-prefix-list
 !
 service ipv4                                   ! Overlay Data plane (IPv4, Layer-3 LISP)
  encapsulation vxlan                           ! Encapsulates LISP data packets using the VXLAN GPO format
  itr map-resolver 192.168.255.4
  etr map-server 192.168.255.4 key 7 0106050D
  etr map-server 192.168.255.4 proxy-reply
  etr
  sgt                                           ! Overlay Policy plane: embeds SGT information in the VXLAN GPO header
  no map-cache away-eids send-map-request
  proxy-etr                                     ! Overlay border configuration (IPv4, Layer-3 LISP)
  proxy-itr 192.168.255.4
  map-server                                    ! Overlay control plane configuration (IPv4, Layer-2 LISP)
  map-resolver
  exit-service-ipv4
 !
 service ethernet
  database-mapping limit dynamic 5000
  itr map-resolver 192.168.255.4
  itr
  etr map-server 192.168.255.4 key 7 14021102
  etr map-server 192.168.255.4 proxy-reply
  etr
  map-server
  map-resolver
  exit-service-ethernet
 !
 instance-id 4099                               ! LISP Instance for CAMPUS VN
  remote-rloc-probe on-route-change
  service ipv4
   eid-table vrf CAMPUS
   database-mapping 172.16.101.0/24 locator-set rloc_609d2e7e-0b01-4456-a10c-18b15aadba0e proxy
   map-cache 0.0.0.0/0 map-request
   import prefix-list Building_11/Floor_1_Multisite_FabricDomain_list1 site-registration
   route-import map-cache bgp 65222 route-map permit-all-eids          ! Import ONLY the SDA prefixes learned from BGP VPNv4 into the LISP map-cache
   route-import database bgp 65222 route-map DENY-CAMPUS locator-set rloc_609d2e7e-0b01-4456-a10c-18b15aadba0e proxy
   itr map-resolver 192.168.255.4 prefix-list Building_11/Floor_1_Multisite_FabricDomain_list1
   itr map-resolver 192.168.255.9                ! Overlay control plane configuration for the Transit Control Plane node
   etr map-server 192.168.255.9 key 7 08344F47
   etr map-server 192.168.255.9 proxy-reply
   route-export site-registrations
   distance site-registrations 250               ! Export LISP site registrations to the global RIB with an AD of 250
   import database site-registration locator-set rloc_609d2e7e-0b01-4456-a10c-18b15aadba0e
   map-cache site-registration
   exit-service-ipv4
  !
  exit-instance-id
 !
 site site_uci                                   ! LISP Map Server and Map Resolver configuration
  description map-server configured from apic-em
  authentication-key 7 0011100F
  eid-record instance-id 4099 0.0.0.0/0 accept-more-specifics
  eid-record instance-id 4099 172.16.101.0/24 accept-more-specifics
  eid-record instance-id 8188 any-mac
  exit-site
 !
 ipv4 locator reachability exclude-default
 ipv4 source-locator Loopback0
 exit-router-lisp
redistribute lisp metric 10
snmp-server enable traps lisp

Note: The LISP configuration of CP-BN_FS2 is structurally identical to that of CP-BN_FS1; only site-specific values such as the registered EID prefixes and keys differ.

Exploring BGP Configurations – CLI

Step 215.

Display the BGP configuration of the co-located Control-Border Node for Fabric SITE 2.


CP-BN_FS2# show running-config | section router bgp
router bgp 65004
 bgp router-id interface Loopback0
 bgp log-neighbor-changes
 bgp graceful-restart
 neighbor 192.168.30.14 remote-as 65444          ! eBGP adjacency with the external device using the global routing table
 neighbor 192.168.30.14 update-source Vlan3004
 neighbor 192.168.255.9 remote-as 65540          ! eBGP adjacency with the Transit Control Plane node
 neighbor 192.168.255.9 ebgp-multihop 255
 neighbor 192.168.255.9 update-source Loopback0
 !
 address-family ipv4
  network 192.168.255.8 mask 255.255.255.255
  redistribute lisp metric 10                    ! Redistribute routes from LISP into the Global Routing Table and advertise them with a metric of 10
  neighbor 192.168.30.14 activate
  neighbor 192.168.30.14 weight 65535
  neighbor 192.168.30.14 advertisement-interval 0
  no neighbor 192.168.255.9 activate
 exit-address-family
 !
 address-family vpnv4                            ! A single VPNv4 session with the Transit Control Plane node carries all VRFs in the SDA Fabric
  bgp aggregate-timer 0
  neighbor 192.168.255.9 activate
  neighbor 192.168.255.9 send-community both
 exit-address-family
 !
 address-family ipv4 vrf CAMPUS
  bgp aggregate-timer 0
  network 172.16.200.1 mask 255.255.255.255      ! Create a route in BGP to satisfy the aggregate-address check
  aggregate-address 172.16.200.0 255.255.255.0 summary-only   ! Advertise only the aggregate to avoid exposing the /32 host routes externally
  redistribute lisp metric 10                    ! Redistribute CAMPUS VRF routes from LISP with a metric of 10
  neighbor 192.168.30.6 remote-as 65444
  neighbor 192.168.30.6 update-source Vlan3002
  neighbor 192.168.30.6 activate
  neighbor 192.168.30.6 weight 65535
 exit-address-family
 !
 address-family ipv4 vrf DEFAULT_VN
  bgp aggregate-timer 0                          ! Check for aggregate routes immediately, and suppress the more-specific routes, instead of waiting for the default aggregate timer
  redistribute lisp metric 10
  neighbor 192.168.30.10 remote-as 65444
  neighbor 192.168.30.10 update-source Vlan3003
  neighbor 192.168.30.10 activate
  neighbor 192.168.30.10 weight 65535
 exit-address-family
 !
 address-family ipv4 vrf GUEST
  bgp aggregate-timer 0
  redistribute lisp metric 10
  neighbor 192.168.30.2 remote-as 65444
  neighbor 192.168.30.2 update-source Vlan3001
  neighbor 192.168.30.2 activate
  neighbor 192.168.30.2 weight 65535
 exit-address-family

Step 216.

Display the Loopback Interfaces on CP-BN_FS2 node.

CP-BN_FS2# show running-config | section interface Loopback
ip radius source-interface Loopback0
interface Loopback0
 description Fabric Node Router ID
 ip address 192.168.255.8 255.255.255.255
interface Loopback1022
 description Loopback Border
 vrf forwarding CAMPUS
 ip address 172.16.200.1 255.255.255.255
! Output omitted for brevity

Note: The aggregate-address command in BGP requires at least one member of the aggregate space to be present in the BGP table for the aggregate to be advertised to a neighbor. To inject routes into BGP, the network statement is used. For this network statement to be of any real value, and for BGP to function correctly, the router advertising this information must truly have a way to reach the destination; the prefix must already be present in the routing table of the device. The Loopback interfaces on a distributed control plane node in SDA are used to satisfy the BGP configuration checks for the network statements and the aggregate-address commands.
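To see this on CP-BN_FS2, a couple of optional verification commands can be used (a quick sketch, not part of the official lab steps; output omitted). The first should show the locally generated aggregate, the second the /32 injected by the network statement for Loopback1022:

CP-BN_FS2# show bgp vpnv4 unicast vrf CAMPUS 172.16.200.0/24
CP-BN_FS2# show bgp vpnv4 unicast vrf CAMPUS 172.16.200.1/32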

Step 217.

Display the IP community list on CP-BN_FS2 node.

CP-BN_FS2# show running-config | section ip community-list
ip community-list 1 permit 655370
ip community-list 2 permit 65537

These community lists match routes originated by the control plane nodes: CP-BN_FS1 and the Transit Control Plane.

Exploring BGP Routes – CLI

Multi-protocol BGP can be a very complex technology. The implementation in Software-Defined Access adds additional factors such as redistribution, LISP, and both eBGP and iBGP sessions. Prefixes from LISP are redistributed into a VRF-aware instance of BGP. The routing information that allows the BGP adjacencies between Loopbacks to form is carried by the underlay routing protocol, IS-IS. It can be easy to get lost in the complexity of the solution. One of DNA Center's goals is to abstract these complexities. However, once the network device's CLI is engaged for any reason, that abstraction principle begins to fade away. At minimum, it is critical to understand the configuration components that bring the BGP sessions up. It is also critical to understand what information, or more accurately which NLRI (Network Layer Reachability Information), is being exchanged at each point in the BGP chain from control plane node to border nodes to fusion routers.

Note: Some knowledge of BGP is necessary to understand the following show command outputs. This lab guide is not meant to teach multi-protocol BGP, but rather to help explain the output of the configuration that DNA Center has provisioned to the Fabric devices. While progressing through the various BGP verification commands, many sections are meant to provide a Before and After view of the BGP and IP routing tables. This is to promote understanding of the transparent changes that are occurring. At minimum, to understand the Before and After in these sections, the show ip route vrf CAMPUS and show ip route vrf GUEST commands should be run on the various BGP-speaking devices.

Step 218.

Display the routes known to vrf CAMPUS on CP-BN_FS2.

CP-BN_FS2# show ip route vrf CAMPUS

Routing Table: CAMPUS
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
       a - application route
       + - replicated route, % - next hop override, p - overrides from PfR

Gateway of last resort is not set

      172.16.0.0/16 is variably subnetted, 3 subnets, 2 masks
B        172.16.101.0/24 [20/10] via 192.168.255.9, 1d12h
B        172.16.200.0/24 [200/0], 1d11h, Null0
C        172.16.200.1/32 is directly connected, Loopback1022
      192.168.30.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.30.4/30 is directly connected, Vlan3002
L        192.168.30.5/32 is directly connected, Vlan3002

The locally generated 172.16.200.0/24 aggregate appears with an administrative distance of 200 and a next hop of Null0 because of the aggregate-address command. The 172.16.101.0/24 prefix originates from Fabric SITE 1 (CP-BN_FS1) and is learned via the Transit Control Plane node (192.168.255.9).

This completes Exercise 12

Exercise 13: Fusion Routers and Configuring the FusionInternal Router

The generic term fusion router comes from the MPLS world. The basic concept is that the fusion router is usually aware of the prefixes available inside each VPN (VRF), either through static routing configuration or through route peering, and can therefore fuse some of these routes together. A fusion router's responsibilities are to route traffic between separate VRFs and to route traffic to and from a VRF to a shared pool of resources such as DHCP servers, DNS servers, and the WLC.

A fusion router has a number of support requirements. It must support:
1. Multiple VRFs
2. 802.1q tagging (VLAN tagging)
3. Sub-interfaces (when using a router)
4. BGPv4, and specifically the MP-BGP extensions

Deployment Note: While it is feasible to use a switch as a fusion router, switches add additional complexity, as generally only the high-end chassis models support sub-interfaces. Therefore, on a fixed-configuration model such as a Catalyst 9300, an SVI must be created on the switch and added to the VRF forwarding definition, which abstracts the logical concept of a VRF even further through logical SVIs. A Layer-2 trunk is then used to connect to the border node, which itself is likely configured for a Layer-3 handoff using a sub-interface (a rough sketch of this switch-side pattern is shown below). To reduce unnecessary complexity, an Integrated Services Router (ISR) is used in the lab as the fusion router.
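As a rough sketch of that switch-based alternative, which is not used in this lab: the VLAN, SVI, and trunk port below are hypothetical examples that reuse the CAMPUS VRF values from this lab purely for illustration.

! Hypothetical fusion-switch side (illustration only, do not configure in the lab)
vlan 3002
!
vrf definition CAMPUS
 rd 1:4099
 !
 address-family ipv4
  route-target export 1:4099
  route-target import 1:4099
 exit-address-family
!
interface Vlan3002
 description Fusion switch to BorderNode for VRF CAMPUS
 vrf forwarding CAMPUS
 ip address 192.168.30.6 255.255.255.252
!
interface GigabitEthernet1/0/48
 description Hypothetical Layer-2 trunk toward the border node
 switchport mode trunk
 switchport trunk allowed vlan 3001-3004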

Because the fusion router is outside the SDA Fabric, it is not managed (for automation) by DNA Center. Therefore, the configuration of a fusion router will always be manual. Future releases and development may reduce or eliminate the need for a fusion router. FusionInternal will be used to allow end-hosts in the Virtual Networks of the SDA Fabric to communicate with shared services.

Note: It is also possible, with minimal additional configuration, to allow hosts in separate VNs to communicate with each other. This is outside the scope of this lab guide and not required for SDA.

This is a multi-step workflow performed on the CLI of FusionInternal:
1. Create the Layer-3 connectivity between Control-BorderNode and FusionInternal.
2. Use BGP to extend the VRFs from the Control-BorderNode to FusionInternal.
3. Use VRF leaking to share routes between the VRFs on FusionInternal.
4. Distribute the routes between the VRFs back to the BorderNode.

Figure: Route Leaking Workflow


Border Automation & Fusion Router Configuration Variations – BorderNode and FusionInternal

Critical Lab Guide Note: The configuration elements provisioned during your run-through are likely to be different. Please be sure not to copy and paste from the lab guide unless instructed specifically to do so. Be aware of which sub-interface is forwarding for which VRF, and which IP address is assigned to that sub-interface, on your particular lab pod during your particular run-through. The fusion router's configuration is meant to be descriptive in nature, not prescriptive. There are six possible variations in how DNA Center can provision the sub-interfaces and VRFs; this means there are six variations in how FusionInternal needs to be configured to match Control-BorderNode.

Identifying the Variation – BorderNode and FusionInternal

When following the instructions in the lab guide, DNA Center provisions SVIs on Control-BorderNode beginning with Vlan 3001, one for each VRF handoff and one for the Global Routing Table (GRT). These SVIs are assigned an IP address with a /30 subnet mask (255.255.255.252) and always use the lower (odd-numbered) of the two available addresses. DNA Center will vary which SVI is forwarding for which VRF and for the GRT. To understand which explanatory graphic and accompanying configuration text file to follow, identify the order of the VRFs/GRT that DNA Center has provisioned on the sub-interfaces.

Figure: BorderNode and FusionInternal Interfaces


In the example above, Vlan3001 is forwarding for GUEST VRF, Vlan3002 is forwarding for the CAMPUS VRF, and Vlan3004 is forwarding for the GRT (also known as the INFRA_VN).

Note: Please be sure to use the appropriate VLAN and Sub-interfaces and do not directly copy and paste from the lab guide unless instructed directly and specifically to do so.

Creating Layer-3 Connectivity

The first task is to allow IP connectivity from the Control-BorderNode (CP-BN_FS2) to FusionInternal. This must be done for each Virtual Network that requires connectivity to shared services. DNA Center automatically configured the Control-BorderNode (CP-BN_FS2) in previous exercises. Using this information, a list of interfaces and IP addresses can be planned for FusionInternal.

Figure: Interfaces to be Manually Configured – FusionInternal

Interface                     VRF             IP Address       VLAN
GigabitEthernet0/0/2.3001     GUEST           192.168.30.2     3001
GigabitEthernet0/0/2.3002     CAMPUS          192.168.30.6     3002
GigabitEthernet0/0/2.3004     INFRA_VN/GRT    192.168.30.14    3004
GigabitEthernet0/0/2          N/A             192.168.37.7     N/A

However, to configure an interface to forward for a VRF instance, the VRF must first be created. Before creating the VRFs on FusionInternal, it is important to understand the configuration elements of a VRF definition. The most important portion of a VRF configuration, other than the case-sensitive name, is the route-target (RT) and the route-distinguisher (RD).

Note: In older versions of IOS such as IOS 12.x, VRFs were not address-family aware. They were supported for IPv4 only and used a different syntax, ip vrf <name>, for configuration. It was mandatory to define the RT and RD in these older code versions. In current versions of code, it is possible to create a VRF without the RT and RD values as long as the address-family is defined. This method (of not defining the RT and RD) is often used when creating VRFs for VRF-Lite deployments that do not require route leaking.
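For comparison, a minimal sketch of the two syntaxes, reusing the CAMPUS values from this lab purely for illustration (do not paste this onto the lab devices):

! Legacy IPv4-only syntax (IOS 12.x style)
ip vrf CAMPUS
 rd 1:4099
 route-target export 1:4099
 route-target import 1:4099
!
! Current address-family-aware syntax (the style DNA Center provisions)
vrf definition CAMPUS
 rd 1:4099
 !
 address-family ipv4
  route-target export 1:4099
  route-target import 1:4099
 exit-address-family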

About Route Distinguishers (RD)

A route distinguisher makes an IPv4 prefix globally unique. It distinguishes one set of routes (in a VRF) from another. This is particularly critical when different VRFs contain overlapping IP space. A route distinguisher is an eight-octet/eight-byte (64-bit) field that is prepended to a four-octet/four-byte (32-bit) IPv4 prefix. Together, these twelve octets/twelve bytes (96 bits) create the VPNv4 address. Additional information can be found in RFC 4364. There are technically three supported formats for the route distinguisher, although they are primarily cosmetic in difference. The distinctions are beyond the scope of this guide.

About Route Targets (RT)

Route targets, in contrast, are used to share routes among VRFs. While the structure is similar to the route distinguisher, a route target is actually a BGP Extended Community attribute. The route target defines which routes are imported into and exported from the VRFs. Additional information can be found in RFC 4360. Many times, for ease of administration, the route-target and route-distinguisher are configured as the same number, although this is not a requirement. It is simply a configuration convention that reduces administrative burden and provides greater simplicity. This convention is used in the configurations provisioned by DNA Center. The RD and RT will also match the LISP Instance-ID.

Putting It All Together

For route leaking to work properly, FusionInternal must have the same VRFs configured as Control-BorderNode (CP-BN_FS2). In addition, the route-distinguisher (RD) and route-target (RT) values must be the same. These have been auto-generated by DNA Center, so the first step is to retrieve those values. Once the VRFs are configured on FusionInternal, the interfaces can be configured to forward for a VRF.

Step 219.

Return to the SecureCRT application.

Step 220.

Open the consoles for CP-BN_FS2 and FusionInternal (if not opened already).

Step 221.

Display and then copy the DNA Center provisioned VRFs, RTs, and RDs shown on the co-located CP-BN_FS2. (This is the configuration to copy and paste to FusionInternal in the next step.)

CP-BN_FS2# show running-config | section vrf definition
vrf definition CAMPUS
 rd 1:4099
 !
 address-family ipv4
  route-target export 1:4099
  route-target import 1:4099
 exit-address-family
vrf definition DEFAULT_VN
 rd 1:4098
 !
 address-family ipv4
  route-target export 1:4098
  route-target import 1:4098
 exit-address-family
vrf definition GUEST
 rd 1:4100
 !
 address-family ipv4
  route-target export 1:4100
  route-target import 1:4100
 exit-address-family
! Output omitted for brevity

Note: The management VRF (Mgmt-intf) is not part of the route-leaking process and can be ignored for this exercise.

Step 222.

On the console of FusionInternal, paste the VRF configuration that was copied from CP-BN_FS2 node. Copying and pasting is required. The RDs and RTs must match exactly.

vrf definition CAMPUS
 rd 1:4099
 !
 address-family ipv4
  route-target export 1:4099
  route-target import 1:4099
 exit-address-family
vrf definition DEFAULT_VN
 rd 1:4098
 !
 address-family ipv4
  route-target export 1:4098
  route-target import 1:4098
 exit-address-family
vrf definition GUEST
 rd 1:4100
 !
 address-family ipv4
  route-target export 1:4100
  route-target import 1:4100
 exit-address-family


Step 223.

Create the Layer-3 sub-interface that will be used for the GUEST VRF.

configure terminal
interface GigabitEthernet0/0/2.3001

Step 224.

Use a meaningful description to help with future troubleshooting.

description FusionInternal to BorderNode for VRF GUEST

Step 225.

Use VLAN 3001 for sub-interface.

encapsulation dot1Q 3001

Step 226.

Add the interface to the VRF forwarding instance GUEST.

vrf forwarding GUEST

Step 227.

Configure the /30 IP address that corresponds with CP-BN_FS2's interface.

ip address 192.168.30.2 255.255.255.252

Step 228.

Exit out of sub-interface configuration mode

Exit


Note: Creating a sub-interface on a physical interface that already has existing IP configuration will cause the IS-IS adjacency to bounce. This is expected.

Step 229.

Create the Layer-3 sub-interface that will be used for the CAMPUS VRF. Once completed, exit sub-interface configuration mode. Use the following information (an example set of commands, following the pattern of Steps 223–228, is shown below):
• Description: FusionInternal to BorderNode for VRF CAMPUS
• VLAN: 3002
• VRF Instance: CAMPUS
• IP Address: 192.168.30.6/30
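A sketch of the corresponding commands, assuming the same parent interface used in Step 223 (verify the interface and addresses on your own pod before entering them):

configure terminal
interface GigabitEthernet0/0/2.3002
 description FusionInternal to BorderNode for VRF CAMPUS
 encapsulation dot1Q 3002
 vrf forwarding CAMPUS
 ip address 192.168.30.6 255.255.255.252
 exit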

Step 230.

Create the final Layer-3 sub-interface, used for the Global Routing Table (INFRA_VN). Once completed, exit global configuration mode completely. Use the following information (an example set of commands is shown below):
• Description: FusionInternal to Control-BorderNode (CP-BN_FS2) for GRT
• VLAN: 3004
• VRF Instance: Not applicable
• IP Address: 192.168.30.14/30
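A sketch of the corresponding commands, again assuming the same parent interface; note that no vrf forwarding statement is used, since this handoff stays in the GRT:

configure terminal
interface GigabitEthernet0/0/2.3004
 description FusionInternal to Control-BorderNode (CP-BN_FS2) for GRT
 encapsulation dot1Q 3004
 ip address 192.168.30.14 255.255.255.252
end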

Step 231.

Ping the CP-BN_FS2 from the FusionInternal using GUEST VRF.

ping vrf GUEST 192.168.30.1

Step 232.

Ping the CP-BN_FS2 from the FusionInternal using CAMPUS VRF

ping vrf CAMPUS 192.168.30.5


Step 233.

Ping the CP-BN_FS2 from the FusionInternal using the Global routing table and a sub-interface.

ping 192.168.37.3

Extending the VRFs to the Fusion Internal

BGP is used to extend the VRFs to the FusionInternal router. As with the sub-interface configuration, DNA Center has fully automated CP-BN_FS2's side of the BGP configuration.

Note: The BGP adjacencies created between a border node and a fusion router use the IPv4 Address Family (not the VPNv4 Address Family). Note, however, that the adjacencies will be formed over per-VRF sessions.

Step 234.

Create the BGP process on FusionInternal. Use the corresponding Autonomous-System number automated by DNA Center on the CP-BN-FS2.

configure terminal


router bgp 65444

Step 235.

Define the neighbor and its corresponding AS Number. This neighbor should use the IP address associated with the GRT sub-interface.

neighbor 192.168.30.13 remote-as 65004

Step 236.

Define the update-source to use the GRT sub-interface.

neighbor 192.168.30.13 update-source GigabitEthernet0/0/2.3004

Step 237.

Activate the exchange of NLRI with the BorderNode.

address-family ipv4
neighbor 192.168.30.13 activate

Step 238.

Add a network statement to advertise the DHCP and DNS Servers' subnet.

network 198.18.133.0

Step 239.

Add a network statement to advertise the WLC subnet.

network 192.168.51.0

Step 240.

Exit IPv4 Address-Family.

exit-address-family

Step 241.

Enter IPv4 Address-Family for vrf GUEST.

address-family ipv4 vrf GUEST

Step 242.

Define the neighbor and its corresponding AS Number. This neighbor should use the IP address associated with the GUEST sub-interface.

neighbor 192.168.30.1 remote-as 65004

Step 243.

Define the update-source to use the GUEST sub-interface.

neighbor 192.168.30.1 update-source GigabitEthernet0/0/2.3001

Step 244.

Activate the exchange of NLRI with the CP_BN_FS2 for vrf GUEST.

neighbor 192.168.30.1 activate


Step 245.

Exit IPv4 Address-Family.

exit-address-family

Step 246.

Enter IPv4 Address-Family for vrf CAMPUS.

address-family ipv4 vrf CAMPUS

Step 247.

Define the neighbor and its corresponding AS Number. This neighbor should use the IP address associated with the CAMPUS sub-interface.

neighbor 192.168.30.5 remote-as 65004

Step 248.

Define the update-source to use the CAMPUS sub-interface.

neighbor 192.168.30.5 update-source GigabitEthernet0/0/2.3002

Step 249.

Activate the exchange of NLRI with the CP-BN_FS2 for vrf CAMPUS.

neighbor 192.168.30.5 activate

Step 250.

Exit IPv4 Address-Family.

exit-address-family

Step 251.

Exit BGP configuration mode and out of global configuration mode completely.

exit
end
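For reference, the complete FusionInternal BGP configuration entered in Steps 234–251 is collected below. It is assembled from the commands above; the sub-interface numbering and IP addresses may differ on your pod, so treat it as a reference sketch rather than something to paste.

router bgp 65444
 neighbor 192.168.30.13 remote-as 65004
 neighbor 192.168.30.13 update-source GigabitEthernet0/0/2.3004
 !
 address-family ipv4
  neighbor 192.168.30.13 activate
  network 198.18.133.0
  network 192.168.51.0
 exit-address-family
 !
 address-family ipv4 vrf GUEST
  neighbor 192.168.30.1 remote-as 65004
  neighbor 192.168.30.1 update-source GigabitEthernet0/0/2.3001
  neighbor 192.168.30.1 activate
 exit-address-family
 !
 address-family ipv4 vrf CAMPUS
  neighbor 192.168.30.5 remote-as 65004
  neighbor 192.168.30.5 update-source GigabitEthernet0/0/2.3002
  neighbor 192.168.30.5 activate
 exit-address-family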


Step 252.

On FusionInternal, verify that three (3) BGP adjacencies come up: one for each configured VRF (GUEST and CAMPUS) and one for the GRT.

Step 253.

On the CP-BN_FS2 node, verify that three (3) BGP adjacencies come up: one for the GUEST VRF, one for the CAMPUS VRF, and one for the GRT. (The DEFAULT_VN neighbor remains Idle, since no matching peer was configured on FusionInternal.)
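A quick way to check the adjacency state on either device in a single command (a convenience sketch; the per-address-family commands in the next steps give the same information in more detail):

FusionInternal# show ip bgp all summary
CP-BN_FS2# show ip bgp all summary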


Optional - Extending the VRFs – Verification

Step 254.

Display the IPv4 BGP adjacency, messages, and prefix advertisements on CP-BN_FS2.

CP-BN_FS2# show ip bgp ipv4 unicast summary
BGP router identifier 192.168.255.8, local AS number 65004
BGP table version is 3, main routing table version 3
2 network entries using 496 bytes of memory
2 path entries using 272 bytes of memory
2/2 BGP path/bestpath attribute entries using 560 bytes of memory
2 BGP AS-PATH entries using 48 bytes of memory
1 BGP community entries using 24 bytes of memory
1 BGP extended community entries using 24 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 1424 total bytes of memory
BGP activity 5/0 prefixes, 6/0 paths, scan interval 60 secs

Neighbor        V    AS  MsgRcvd  MsgSent  TblVer  InQ  OutQ  Up/Down  State/PfxRcd
192.168.30.14   4  65444       25       25       3    0     0  00:19:01            1

Step 255.

Display the IPv4 BGP adjacency, messages, and prefix advertisements on FusionInternal.

FusionInternal# show ip bgp ipv4 unicast summary
BGP router identifier 192.168.255.7, local AS number 65444
BGP table version is 5, main routing table version 5
2 network entries using 496 bytes of memory
2 path entries using 272 bytes of memory
2/2 BGP path/bestpath attribute entries using 560 bytes of memory
2 BGP AS-PATH entries using 64 bytes of memory
1 BGP extended community entries using 24 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 1416 total bytes of memory
BGP activity 4/0 prefixes, 4/0 paths, scan interval 60 secs

Neighbor        V    AS  MsgRcvd  MsgSent  TblVer  InQ  OutQ  Up/Down  State/PfxRcd
192.168.30.13   4  65004       23       22       5    0     0  00:17:01            1

Step 256.

Display the VPNv4 BGP adjacency, messages, and prefix advertisements on CP-BN_FS2.

CP-BN_FS2# show ip bgp vpnv4 all summary
BGP router identifier 192.168.255.8, local AS number 65004
BGP table version is 8, main routing table version 8
3 network entries using 768 bytes of memory
4 path entries using 544 bytes of memory
3/3 BGP path/bestpath attribute entries using 888 bytes of memory
2 BGP AS-PATH entries using 48 bytes of memory
1 BGP community entries using 24 bytes of memory
1 BGP extended community entries using 24 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 2296 total bytes of memory
BGP activity 5/0 prefixes, 6/0 paths, scan interval 60 secs

Neighbor        V    AS  MsgRcvd  MsgSent  TblVer  InQ  OutQ  Up/Down  State/PfxRcd
192.168.30.2    4  65444       11       11       8    0     0  00:06:50            0
192.168.30.6    4  65444       14       16       8    0     0  00:09:27            0
192.168.30.10   4  65444        0        0       1    0     0  never            Idle
192.168.255.9   4  65540     4462     4456       8    0     0  2d19h               2

Step 257.

Display the VPNv4 BGP adjacency, messages, and prefix advertisements on FusionInternal.

FusionInternal# show ip bgp vpnv4 all summary
BGP router identifier 192.168.255.7, local AS number 65444
BGP table version is 3, main routing table version 3
2 network entries using 512 bytes of memory
2 path entries using 272 bytes of memory
4/2 BGP path/bestpath attribute entries using 1184 bytes of memory
2 BGP AS-PATH entries using 64 bytes of memory
1 BGP extended community entries using 24 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 2056 total bytes of memory
BGP activity 4/0 prefixes, 4/0 paths, scan interval 60 secs

Neighbor        V    AS  MsgRcvd  MsgSent  TblVer  InQ  OutQ  Up/Down  State/PfxRcd
192.168.30.1    4  65004       14       13       3    0     0  00:08:57            0
192.168.30.5    4  65004       18       16       3    0     0  00:11:34            2

Step 258.

Verify that the DHCP/DNS Servers' subnet and the WLC subnet are learned on CP-BN_FS2 in the Global Routing Table.

CP-BN_FS2# show ip route bgp
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
       a - application route
       + - replicated route, % - next hop override, p - overrides from PfR

Gateway of last resort is 192.168.8.6 to network 0.0.0.0

B     198.18.133.0/24 [20/0] via 192.168.30.14, 00:17:09

Step 259.

Display the routes known to vrf CAMPUS on CP-BN_FS2.

CP-BN_FS2# show ip route vrf CAMPUS

Routing Table: CAMPUS
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
       a - application route
       + - replicated route, % - next hop override, p - overrides from PfR

Gateway of last resort is not set

      172.16.0.0/16 is variably subnetted, 3 subnets, 2 masks
B        172.16.101.0/24 [20/10] via 192.168.255.9, 1d17h
B        172.16.200.0/24 [200/0], 1d16h, Null0
C        172.16.200.1/32 is directly connected, Loopback1022
      192.168.30.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.30.4/30 is directly connected, Vlan3002
L        192.168.30.5/32 is directly connected, Vlan3002

Step 260.

Display the routes known to vrf CAMPUS on FusionInternal.

FusionInternal# show ip route vrf CAMPUS

Routing Table: CAMPUS
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
       a - application route
       + - replicated route, % - next hop override, p - overrides from PfR

Gateway of last resort is not set

      172.16.0.0/24 is subnetted, 2 subnets
B        172.16.101.0 [20/0] via 192.168.30.5, 03:40:23
B        172.16.200.0 [20/0] via 192.168.30.5, 05:44:45
      192.168.30.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.30.4/30 is directly connected, GigabitEthernet0/0/2.3002
L        192.168.30.6/32 is directly connected, GigabitEthernet0/0/2.3002
      198.18.133.0/24 is variably subnetted, 2 subnets, 2 masks
B        198.18.133.0/24 is directly connected, 05:07:15, GigabitEthernet0/0/1
L        198.18.133.7/32 is directly connected, GigabitEthernet0/0/1

The Staff and Production IP Address Pools (172.16.200.0/24 and 172.16.101.0/24) are known via eBGP from the BorderNode.


Note: The command show bgp all or alternatively show ip bgp all can be used to see specifically which prefixes (NLRI) are learned and what BGP address-family they are learned by.


Use VRF Leaking to Share Routes on FusionInternal

FusionInternal now has routes to the SDA prefixes learned from Control-BorderNode (CP-BN_FS2). It also has routes to its directly connected subnets where the DHCP/DNS servers and WLC reside. Now that all these routes are in the routing tables on FusionInternal, they can be used for fusing the routes together (route leaking). Route-maps are used to specify which routes are leaked between the Virtual Networks. These route-maps need to match very specific prefixes, which is best accomplished by first defining a prefix-list and then referencing that prefix-list in a route-map. Prefix-lists are similar to ACLs in that they are used to match something; they can be configured to match an exact prefix length, a prefix range, or a specific prefix. Once configured, the prefix-list can be referenced in the route-map. Together, prefix-lists and route-maps provide the granularity necessary to ensure the correct NLRI are advertised to Control-BorderNode (CP-BN_FS2).

Note: The following prefix-lists and route-maps can be safely copied and pasted.

Step 261.

On FusionInternal, configure a two-line prefix-list that matches the /24 CAMPUS VRF subnets. Name the prefix list CAMPUS_VRF_NETWORK.

configure terminal
ip prefix-list CAMPUS_VRF_NETWORK seq 5 permit 172.16.101.0/24
ip prefix-list CAMPUS_VRF_NETWORK seq 10 permit 172.16.200.0/24


Step 262.

Configure a prefix-list that matches the shared services (DHCP/DNS Servers') subnet. Name the prefix list SHARED_SERVICES_NETWORK.

configure terminal
ip prefix-list SHARED_SERVICES_NETWORK seq 5 permit 198.18.133.0/24
end

Step 263.

Route-maps can now be configured to match the specific prefixes referenced in the prefix lists. Configure a route-map to match the CAMPUS_VRF_NETWORK prefix list. Name the route-map CAMPUS_VRF_NETWORK.

configure terminal
route-map CAMPUS_VRF_NETWORK permit 10
 match ip address prefix-list CAMPUS_VRF_NETWORK
end

Step 264.

Configure a route-map to match the SHARED_SERVICES_NETWORK prefix list. Name the route-map SHARED_SERVICES_NETWORK.

configure terminal
route-map SHARED_SERVICES_NETWORK permit 10
 match ip address prefix-list SHARED_SERVICES_NETWORK
end


Use VRF Leaking to Share Routes and Advertise to BorderNode

About Route Leaking

Route leaking is done by importing and exporting route-maps under the VRF configuration. Each VRF should export the prefixes belonging to it using a route-map, and import the routes needed for access to shared services using another route-map. Using the route-map SHARED_SERVICES_NETWORK with the import command permits only the shared services subnets to be leaked into the VRFs. This allows the end-hosts in the Fabric to communicate with the DHCP/DNS servers and the WLC, but does not allow inter-VRF communication.

Figure: VRF Leaking for Shared Service Access Only


Using the route-target import command will allow for inter-VRF communication. Inter-VRF communication is beyond the scope of this lab guide, and uncommon in campus production networks.

In production, rather than permitting inter-VRF communication, which adds additional complexity, the entire non-Guest prefix space (IP Address Pools) is associated with a single VN (VRF). Scalable Group Tags (SGTs) are then used to permit or deny communication between end-hosts. This is a simpler solution that also provides more visibility and granularity into which hosts can communicate with each other.

Note: When using route-maps, only a single import and a single export command can be used. This makes sense, as a route-map is used for filtering the ingress and egress routes against the elements matched in that route-map. Route-maps are used when finer control is required over the routes imported into and exported from a VRF than what the import route-target and export route-target commands provide. If route-maps are not used to specify a particular set of prefixes, VRF leaking can be performed by importing and exporting route-targets. Used this way, route-targets export all routes from a particular VRF instance and import all routes from another VRF instance; this is less granular and more often used in MPLS. Route-targets allow multiple import and export statements to be applied, as they are used without any filtering mechanism such as a route-map.
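Purely for illustration (and out of scope for this lab), leaking with route-targets instead of route-maps would look something like the sketch below, which would pull all GUEST routes (RT 1:4100) into CAMPUS. Do not configure this on the lab devices.

vrf definition CAMPUS
 address-family ipv4
  route-target import 1:4100
 exit-address-family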

Step 265.

Configure the CAMPUS VRF for route leaking using route-maps. The VRF should export its own routes and import the Shared Services Networks only.

configure terminal
vrf definition CAMPUS
 address-family ipv4
  import ipv4 unicast map SHARED_SERVICES_NETWORK
  export ipv4 unicast map CAMPUS_VRF_NETWORK
 exit-address-family
 exit
end


Note: The VRF leaking configuration for the CAMPUS VRF can safely be copied and pasted.

Step 266.

Please be patient. BGP converges slowly, but reliably, and it may take a few minutes for the routes to propagate. If more than five minutes have passed, check the spelling and capitalization of the prefix-list, route-map, and VRF names. If routes are still not propagating, contact your instructor.

Note: In Cisco software, import actions are triggered when a new routing update is received or when routes are withdrawn. During the initial BGP update period, the import action is postponed to allow BGP to converge more quickly. Once BGP converges, incremental BGP updates are evaluated immediately and qualified prefixes are imported as they are received.

Route Leaking – Validation (Control-Border Nodes)

Route leaking allows the shared services routes to be imported into the VRF forwarding tables on FusionInternal. These routes are then advertised via eBGP to Control-BorderNode (CP-BN_FS2). CP-BN_FS2 uses its VPNv4 adjacency to advertise these routes to the control plane nodes; the VPNv4 session carries all the VRFs' information together, and the BGP process keeps track of which prefix is associated with which VRF. Once these routes reach the control plane nodes, those routers decide which routes are redistributed into and out of LISP and BGP. Once imported into LISP, the routes are available to the end-hosts in the EID-space through their corresponding edge nodes.
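One optional way to watch the leaked prefix move through that chain (a sketch, not part of the official steps; output omitted):

FusionInternal# show ip bgp vpnv4 vrf CAMPUS 198.18.133.0/24
CP-BN_FS2# show bgp vpnv4 unicast vrf CAMPUS 198.18.133.0/24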

Step 267.

Display the routes known to vrf CAMPUS on the BorderNode.

CP-BN_FS2# show ip route vrf CAMPUS

Routing Table: CAMPUS
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
       a - application route
       + - replicated route, % - next hop override, p - overrides from PfR

Gateway of last resort is not set

      172.16.0.0/16 is variably subnetted, 3 subnets, 2 masks
B        172.16.101.0/24 [20/10] via 192.168.255.9, 1d17h
B        172.16.200.0/24 [200/0], 1d17h, Null0
C        172.16.200.1/32 is directly connected, Loopback1022
      192.168.30.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.30.4/30 is directly connected, Vlan3002
L        192.168.30.5/32 is directly connected, Vlan3002
B        198.18.133.0/24 [20/0] via 192.168.30.6, 00:00:18

Optional - Route Leaking – Validation (Edge Nodes)

The end-hosts attached to the Edge Nodes have not yet joined the network; this will be addressed in the next exercise on Fabric DHCP. Therefore, the Edge Nodes have not registered any prefixes with the control plane nodes. Using lig (the LISP Internet Groper), it is possible to verify that a packet sourced from the EID-space and sent to the shared services subnets would be encapsulated and sent via the overlay.
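As a sketch of that check from an edge node console (the exact lig argument order can vary by software release, and the DNS server 198.18.133.30 is used here only as an example destination; output omitted):

EdgeNode2# lig instance-id 4099 198.18.133.30
EdgeNode2# show ip lisp map-cache instance-id 4099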

LISP Forwarding Logic – Part 1

When a packet is sent from an end-host to an edge node, the edge node must determine whether the packet is to be LISP (VXLAN) encapsulated. If the packet is encapsulated, it is sent via the overlay; if not, it is forwarded natively via the underlay. To be eligible for encapsulation, a packet must be sourced from the EID-space. The edge node then looks for a default route or a null route in its routing table. If the end-host packet matches either of those routes, it is eligible for encapsulation, and the edge node will query the control plane node for how to reach the destination. When an edge node receives information back from the control plane node after a query, it is stored in the LISP map-cache.

Note: If the control plane node does not know how to reach a destination, it replies to the edge node with a negative map reply (NMR). This triggers the edge node to forward the packet natively. However, if the use-petr command is configured in the LISP configuration on the edge node, an NMR triggers the edge node to send the packet via the overlay to a default border.

Step 268.

Verify the LISP map-cache for the CAMPUS VRF (LISP Instance 4099).

EdgeNode2# show ip lisp map-cache instance-id 4099
LISP IPv4 Mapping Cache for EID-table vrf CAMPUS (IID 4099), 3 entries

0.0.0.0/0, uptime: 3d02h, expires: never, via static-send-map-request
  Negative cache entry, action: send-map-request
172.16.200.0/24, uptime: 2d00h, expires: never, via dynamic-EID, send-map-request
  Negative cache entry, action: send-map-request
192.0.0.0/2, uptime: 05:35:46, expires: 00:06:23, via map-reply, forward-native
  Encapsulating to proxy ETR

This completes Exercise 13


Exercise 14: Exploring Transit Control Plane Configuration

Step 269.

Display the BGP adjacencies with CP-BN_FS1 and CP-BN_FS2.

TransitControlPlane# show running-config | section router bgp
router bgp 65540
 bgp router-id interface Loopback0
 bgp log-neighbor-changes
 bgp graceful-restart
 neighbor 192.168.255.4 remote-as 65222          ! Adjacency with CP-BN_FS1
 neighbor 192.168.255.4 ebgp-multihop 255
 neighbor 192.168.255.4 update-source Loopback0
 neighbor 192.168.255.8 remote-as 65004          ! Adjacency with CP-BN_FS2
 neighbor 192.168.255.8 ebgp-multihop 255
 neighbor 192.168.255.8 update-source Loopback0
 !
 address-family ipv4
  no neighbor 192.168.255.4 activate
  no neighbor 192.168.255.8 activate
 exit-address-family
 !
 address-family vpnv4
  neighbor 192.168.255.4 activate
  neighbor 192.168.255.4 send-community both
  neighbor 192.168.255.4 route-map deny-all in
  neighbor 192.168.255.4 route-map tag_transit_eids out
  neighbor 192.168.255.8 activate
  neighbor 192.168.255.8 send-community both
  neighbor 192.168.255.8 route-map deny-all in
  neighbor 192.168.255.8 route-map tag_transit_eids out
 exit-address-family
 !
 address-family ipv4 vrf CAMPUS
  redistribute lisp metric 10
 exit-address-family

Step 270.

Display the routes known to the Transit Control Plane.

TransitControlPlane# show bgp vpnv4 unicast all summary
BGP router identifier 192.168.255.9, local AS number 65540
BGP table version is 17, main routing table version 17
3 network entries using 768 bytes of memory
3 path entries using 408 bytes of memory
1/1 BGP path/bestpath attribute entries using 296 bytes of memory
1 BGP extended community entries using 24 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 1496 total bytes of memory
BGP activity 3/0 prefixes, 6/3 paths, scan interval 60 secs

Neighbor        V    AS  MsgRcvd  MsgSent  TblVer  InQ  OutQ  Up/Down  State/PfxRcd
192.168.255.4   4  65222     6373     6386      17    0     0  4d00h               0
192.168.255.8   4  65004     4961     4971      17    0     0  3d03h               0

Step 271.

Verify the LISP registrations for the user groups and the Shared Services subnet on the Transit Control Plane.

TransitControlPlane# show lisp site eid-table vrf CAMPUS
LISP Site Registration Information
* = Some locators are down or unreachable
# = Some registrations are sourced by reliable transport

Site Name    Last       Up     Who Last               Inst   EID Prefix
             Register          Registered             ID
site_uci     never      no     --                     4099   0.0.0.0/0
             1d17h      yes#   192.168.255.4:28794    4099   172.16.101.0/24
             1d17h      yes#   192.168.255.8:32676    4099   172.16.200.0/24
             00:18:32   yes#   192.168.255.8:32676    4099   198.18.133.0/24

This completes Exercise 14

Exercise 15: Testing Inter-Fabric Site Connectivity

To verify end-host connectivity, you will use Apache Guacamole (http://192.168.100.100:8080/guacamole/#/) to launch consoles for the host VMs.

Step 272.

From the Jump Host, use the web browser and click the bookmarked (PC VMs) Guacamole link to connect to your host VMs.

Step 273.

Open console windows to access the PC-01 (connected to Fabric SITE 1) and PC-03 (connected to Fabric SITE 2) VMs by selecting each one and right-clicking it.

Note:

Once you right-click PC-01 and open it in a new tab, and do the same for PC-03, both tabs will show PC01 in the URL. However, these are two different Windows machines; confirm this using their MAC addresses.


Step 274.

On PC-01, open the Network Adapter Settings and confirm that the assigned IP address is from the defined Production IP Pool, i.e. 172.16.101.0/24, and that 802.1X authentication is enabled. Under the Authentication tab, click Additional Settings and add the user credentials:
Username: Fred
Password: DNAisC00L ("00" are zeros)


Step 275.

On PC-01, open the command prompt and verify reachability to the gateway IP address.


Step 276.

On PC-03, open the Network Adapter Settings and confirm that the assigned IP address is from the defined Staff IP Pool, i.e. 172.16.200.0/24, with Default Gateway 172.16.200.1 and DNS Server 198.18.133.30, and that 802.1X authentication is enabled. Under the Authentication tab, click Additional Settings and add the user credentials:
Username: Emily
Password: DNAisC00L ("00" are zeros)

Step 277.

On PC-03, open the command prompt and verify reachability to the gateway IP address (172.16.200.1).


Note: The IP addresses on the Windows machines may differ from the ones shown in this lab guide. The only requirement is that the addresses come from the correct IP Pools defined during the lab.

Step 278.

Now, to test connectivity between Fabric SITE 1 and Fabric SITE 2, ping PC-01 (172.16.101.230) from PC-03 (172.16.200.210) and vice versa (example commands below).
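For reference, the commands from the Windows command prompt on each PC (the addresses shown are the ones captured for this guide; substitute the addresses actually assigned on your pod):

From PC-03:
C:\> ping 172.16.101.230

From PC-01:
C:\> ping 172.16.200.210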


This completes Exercise 15


This completes the DNA Center 1.2.10 SD-Access Multi-Site Lab.
