NANOG purchased a wifi solution of its own, based on Xirrus, via an RFP process.
1,700 unique devices connected to wifi this session
Xirrus has been on site the whole time trying to find ways to better utilize spectrum. There were bad cable drops, bad hardware, and bad patch cables.
(Wifi has sucked repeatedly. Lots of speed shifts, lots of stalled sessions. I have had to toggle MacBook wifi off/on several times in an attempt to get a working connection.)
NANOG is not blaming Xirrus and expects great things in the future. Expect no "issues" for next meeting.
Next meeting ought to be amusing...
NANOG 64 Notes
NANOG 64, June 1-3, 2015, San Francisco - Charles Spurgeon
Wednesday, June 3, 2015
Evolution of Ethernet Speeds - What’s New and What’s Next
Expect 802.11ax to go four times faster than 802.11ac.
- See more at: https://www.nanog.org/meetings/abstract?id=2576
In this presentation we'll talk about the latest Ethernet developments that are bringing a variety of new technology to the market for different applications, with speeds ranging from 2.5 GE to 400 GE. We'll take a look at the new 2.5 GE, 5 GE and 25 GE speeds, 2nd generation 40 GE and 100 GE, 400 GE and what's possible in the future.
"If we go for the more conservative 4x estimate, and assume a massive 160MHz channel, the maximum speed of a single 802.11ax stream will be around 3.5Gbps (compared with 866Mbps for a single 802.11ac stream). Multiply that out to a 4×4 MIMO network and you get a total capacity of 14Gbps." http://www.extremetech.com/computing/184685-what-is-802-11ax-wifi-and-do-you-really-need-a-10gbps-connection-to-your-laptop
NBASE-T - "everyone except Broadcom"
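A quick back-of-the-envelope check of the quoted 802.11ax numbers (figures taken from the ExtremeTech article above, not from the IEEE draft):

```python
# Check the quoted 802.11ax arithmetic: "conservative 4x" over a single
# 802.11ac stream, then scaled out to a 4x4 MIMO configuration.
ac_stream_mbps = 866                     # single 802.11ac stream (80MHz)
ax_stream_mbps = 4 * ac_stream_mbps      # 4x estimate -> ~3.5Gbps/stream
total_gbps = 4 * ax_stream_mbps / 1000   # 4x4 MIMO aggregate -> ~14Gbps

print(round(ax_stream_mbps / 1000, 1))   # ~3.5 Gbps per stream
print(round(total_gbps, 1))              # ~13.9 Gbps total
```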
For 25Gbps there is an SFP28 - same size as 10Gig.
40Gbps - done and in good shape. Popular in DCs with breakout cables. There is now a 40km SM interface.
100Gbps - in 2nd generation. 1M 100GE ports projected in 2016. In "early majority" phase of market adoption. (OSI 100Gbps transceivers are down to $20K currently.) QSFP28 down to 3.5W for 100Gbps transceiver. Currently four different vendor MSAs to do short reach (2km or 500m) for 100Gbps. Market will sort it out. 100Gbps signaling is still just on/off signaling.
400Gbps uses complex modulation - can't blink the light on/off fast enough with current electronics. 802.3bs task force to develop interfaces for 400Gbps. 400GBASE-SR16 - 16 x 25Gbps over parallel MMF. Strong desire to support 10km.
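The lane arithmetic behind the form factors mentioned above can be sketched as follows (form-factor/lane pairings as given in the talk):

```python
# Each interface is N parallel lanes at a common per-lane signaling rate;
# 25Gbps lane technology underlies SFP28, QSFP28, and 400GBASE-SR16.
interfaces = {
    "SFP28 (25GE)":   (1, 25),
    "QSFP28 (100GE)": (4, 25),
    "400GBASE-SR16":  (16, 25),   # 16 x 25G over parallel MMF
}
for name, (lanes, per_lane_gbps) in interfaces.items():
    print(f"{name}: {lanes} x {per_lane_gbps}G = {lanes * per_lane_gbps} Gbps")
```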
Expect the 400Gbps standard by 2017, first interfaces some time after that. Initial modules CDFP for short reach. CFP2 (old 100Gbps module) which has 8x50Gbps electrical interface.
Some time in 2020+ expect serial signaling at 400 Gbps, which will make terabit possible by combining multiple 400 Gbps serial flows.
Could see some new standards around 50 Gbps signaling as a result of work around 25Gbps.
"We are still a ways away from 400 Gbps serial signaling, but will get there eventually."
"At higher speeds and longer distances (beyond the standard) it gets into optical company secret sauce and you will have to deal with the vendors for 40 and 80km."
Q: What about MTU negotiation in auto-negotiation? A: No interest in IEEE in defining maximum frame size due to the installed base. Customers could insist on it if they want. (Note that the standard Ethernet max frame size is now 2k, which holds 1500B of data plus 482B of VLAN IDs/tags/labels/whatever.)
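A sanity check on that 2k figure, assuming the 802.3 envelope-frame limit of 2000 bytes and the classic 18-byte untagged header+FCS:

```python
# The 2000-byte "envelope frame" is the classic 1518-byte maximum frame
# plus a 482-byte budget for VLAN tags, labels, and other encapsulation.
payload = 1500
basic_overhead = 18                       # 6 dst + 6 src + 2 type/len + 4 FCS
classic_max = payload + basic_overhead    # 1518: classic untagged max frame
envelope_max = 2000
tag_budget = envelope_max - classic_max   # 482 bytes of tag/label headroom
print(classic_max, tag_budget)
```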
New Cybersecurity Obligations and CPNI Rules Represent Regulatory Sea-Change for Network Operators
- See more at: https://www.nanog.org/meetings/abstract?id=2590
The early months of 2015 have seen an unprecedented level of action in the realm of U.S. cybersecurity policy. The Obama Administration, in response to a growing number of cybersecurity compromises and data breaches, has announced an aggressive cybersecurity and data security agenda. The activity appears motivated at least in part by the spike in the number of U.S. data breaches in 2014.
Major push by the Obama administration since it got into the White House, with lots of activity in 2015.
The cybersecurity agenda of the Obama Administration will have direct and indirect consequences for network operators, potentially adding to their already substantial regulatory burdens. On top of these new obligations, the FCC adopted new open Internet rules that for the first time regulate how Internet service providers can utilize Consumer Proprietary Network Information (CPNI). That's a big deal to network operators who supplement revenue from user fees in a variety of ways by trafficking in user data. This presentation will educate the audience on the new rules and outline compliance strategies.
Lots of concern about sharing information about breaches due to liability, publicity, etc.
Obama pushing BCP, but so far voluntary.
No general privacy law in the US for citizens; instead this is done industry by industry, policed by the FTC to ensure that companies are meeting their published privacy statements. Kludgy and incoherent.
Need baseline protections for consumers, also being pushed by Obama.
Q: Why do you believe that FCC open access will not be struck down? A: The decision by the court reviewing the FCC rules was extremely clear and listed the required elements so well that it is difficult to challenge. (The previous speaker was also of that opinion.)
Q: Agree that re-classification into common carriage is the right thing to do, but is it defensible this time because it was done properly this time? A: The DC Circuit addresses most agency appeals; the next step is the Supreme Court. The DC Circuit listed everything that the FCC needed to do to meet the legal requirements for common carriage rules, and the FCC met all of those requirements this time. So the feeling is that it will withstand the Supreme Court, and it is even unlikely for a challenge to get to that level.
Overview of DDoS types
Review of DDoS types being seen today.
The perception is that DDoS is some magical event that you cannot anticipate or deal with. But that is not true, and there are approaches you should take for mitigation. Present mitigation approaches in a cost/benefit matrix to management so that they can decide what SLA they need/want and how much they are willing to spend.
This talk covers the principles and particular implementations of DDoS. It goes into detail on the bottlenecks that are generally exploited/overloaded, the attack types, and the solutions to those. - See more at: https://www.nanog.org/meetings/abstract?id=2584
Architecture for fine-grain, high-resolution Telemetry for network elements
- See more at: https://www.nanog.org/meetings/abstract?id=2574
Yet another discussion about how broken network monitoring and management is. Maybe it's time for NANOG to organize a user revolt against vendors: come up with a list of what customers need and tell vendors that they need to meet the new requirements. How about config file management with commit testing and rollback, Cisco?
Networks are evolving quickly into highly automated, self-adapting, intelligent integrated systems. However, even the most intelligent system can only make decisions as good as the input information it is provided with.
In this session we address an architecture that enables high-frequency export of telemetry data from network elements. Traditional protocols like SNMP retrieve data from network elements using a “pull” model, which suffers from several well-documented shortcomings, the most important being a centralized architecture that puts strain on the central processor of the network element, taking cycles away from the main functions of the router.
This architecture addresses the problem by creating a distributed export mechanism where telemetry is “pushed” out directly from the source, rather than relying on the central component. This is intended to enable innovative applications, such as dynamic provisioning of devices based on utilization levels, security and/or quality prediction based on anomaly heuristic analysis, etc. The architecture defines the following:
1. The Open telemetry model for a network element.
- Configuration and Provisioning
- Capability discovery
2. Implementation of telemetry probes in various internal sub systems of a network element.
3. Open-source-based export mechanisms for the generated telemetry data
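A minimal sketch of the push model described above, assuming a hypothetical UDP collector and JSON encoding (the architecture as presented does not specify either; the names here are illustrative):

```python
# Hypothetical push-model exporter: each subsystem (line card, forwarding
# engine, etc.) sends its own timestamped counters straight to a collector,
# instead of a central SNMP agent being polled for everything.
import json
import socket
import time

COLLECTOR = ("127.0.0.1", 9999)   # assumed collector address, for illustration

def push_telemetry(sock, subsystem, counters):
    """Push one timestamped sample directly from a subsystem."""
    sample = {"ts": time.time(), "source": subsystem, "counters": counters}
    sock.sendto(json.dumps(sample).encode(), COLLECTOR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
push_telemetry(sock, "linecard0/eth0",
               {"in_octets": 123456, "out_octets": 654321})
```

The point of the sketch is the inversion of control: the device decides when to emit, so no central poller cycle is consumed.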
SDN in the Management Plane: OpenConfig and Streaming Telemetry
See more at: https://www.nanog.org/meetings/abstract?id=2573
The networking industry has made good progress in the last few years on developing programmable interfaces and protocols for the control plane to enable a more dynamic and efficient infrastructure. Despite this progress, some parts of networking risk being left behind, most notably network management and configuration. The state-of-the-art in network management remains relegated to proprietary device interfaces (e.g., CLIs), imperative, incremental configuration, and lack of meaningful abstractions.
We propose a framework for network configuration guided by software-defined networking principles, with a focus on developing common models of network devices, and common languages to describe network structure and policies. We also propose a publish/subscribe framework for next-generation network telemetry, focused on streaming structured data from network elements themselves.
All the usual problems: CLIs are proprietary, scaling sucks, screen-scrape sucks, etc.
However, these are SDN shops, so they can change stuff in software and stop waiting for vendors to never fix things. New world order. But wait! Didn't they hear that Cisco is a software oriented company?!
Propose: "model-driven network management"
1. topology
2. configuration
3. telemetry
Telemetry:
SNMP default choice. Old protocol, choices were made to conserve limited resources of the time.
Asking for requirements on a new telemetry protocol to replace SNMP.
gRPC, Thrift, or protocol buffers over UDP (sounds like SNMP)
streaming telemetry is the goal, ask your vendors
Config:
OpenConfig effort - informal industry collaboration (operators, not vendors). Motivated by lack of abstractions and programmability, all the usual litany of complaints people have been repeating for years and vendors have not been addressing.
Focus on vendor-neutral configuration and operational state models - adopted the YANG data modeling language (RFC 6020)
Weekly meetings, github repository
No legal structure to this group, nothing formal, trying to avoid that overhead (and political layer)
Their approach is operations-specific; they don't want to care about standards politics
Looks like the IETF and other groups are now being ignored when people need to get work done. Yet another new world order.
"It's time for the management plane to join the age of SDN"
They do not intend to play the game according to vendors' rules. "The fact that there is a different set of commands for BGP across vendors and devices is ridiculous and there is no reason for it that we can see."
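The complaint above can be made concrete with a sketch: in a model-driven world a BGP neighbor is one structured document, and per-vendor CLI is just a rendering of it. This shows the shape of the idea only; it is not the actual OpenConfig schema, and all field names here are illustrative:

```python
# Illustrative vendor-neutral BGP config as structured data (NOT the real
# OpenConfig YANG model). Tooling renders the same document to any vendor CLI.
import json

bgp_config = {
    "bgp": {
        "global": {"as": 64512, "router-id": "192.0.2.1"},
        "neighbors": [
            {"address": "198.51.100.2", "peer-as": 64513,
             "description": "transit-a"},
        ],
    }
}
print(json.dumps(bgp_config, indent=2))
```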
Q: Why don't vendors adopt approaches used by server admins, like puppet, chef, etc.? Why not use the same technology? A: Tools for server management may not apply across the board to more complex pieces of network gear with complex configs. However, the models are key, and models can use JSON encoding and work with puppet/chef.
Q. Why push this in this way? There are a number of repositories for YANG stuff. Why is what you are doing different? A. We are publishing models into some of the same repositories. We are also working with the IETF; however, this is a consumer view and an operators' view, so our perspective is different.
Q. Are you positing that the IETF is not an effective forum for operators to work with? A. We are trying hard to work with the IETF and it hasn't been easy.
Q. How can we avoid doing MIBs all over again in new formats? A. Operator-based view: pushing vendors to support a base model, trying to keep it lean. Vendors should be able to add their own stuff, but it should be programmable, unlike the past.
Q. When dealing with vendors, if you give them an inch they will take a mile. A. We are trying to make the base model as complete as possible, so the challenge to the vendor is: what do you need that isn't in the base?
Igor's response: the IETF moves very slowly. The IETF is structured into very different hierarchies and creates one guidance structure for every hierarchy ("make everything look the same"), so we are avoiding that, and we will be feeding this back to the IETF with informational drafts. Basic answer: the IETF is too slow and its organization is not helpful.
Source Routing 2.0. Why Now, Why Again?
Traditional source routing using IP header options was never widely deployed due to security concerns. Recent buzz around Segment Routing (a.k.a. SPRING) has re-invigorated interest in source routing technologies and their potential benefits. For many operators however, moving to SPRING represents a significant change in their operating practices, so in a more incremental approach they are implementing SPRING-inspired designs using current technologies with minor augmentations.
In this talk we will review SPRING/SR, but will mainly focus on using existing protocols for achieving similar benefits. We will discuss: - clever usage of static LSPs to achieve predictable label values in a data-center network - minor enhancements to BGP-LU for more resilient EPE (Egress Peer Engineering) - interoperability considerations between SPRING and non-SPRING domains
See more at: https://www.nanog.org/meetings/abstract?id=2585
Traditional approach: putting source-routed headers in packets. All deprecated in both v4 and v6; RFC 5095 pretty much bans routing headers in packets.
If you put the packet into a tunnel and source route the tunnel, then people find that acceptable: MPLS TE with EROs.
New interest in "segment routing" to tunnel packets from src to dst by describing route in the header as a sequence of "segments." Arises out of SDN / controller world view.
Appears to assume a closed universe whose endpoints are owned/operated by same entity. Example: VXLAN tunneling in DC environment.
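The segment-list idea can be sketched as a toy: the ingress pushes an ordered list of segment IDs describing the path, and each hop consumes the top one. The labels and semantics here are illustrative only, not the SPRING spec:

```python
# Toy model of segment routing: the path lives in the packet header as an
# ordered segment list, and is consumed hop by hop as the packet traverses it.
def traverse(segment_stack):
    """Walk a packet along its segment list, returning the hops taken."""
    stack = list(segment_stack)    # copy: the header is consumed en route
    hops = []
    while stack:
        hops.append(stack.pop(0))  # each node pops the top segment, forwards
    return hops

# Hypothetical node-SIDs for an explicit path through three routers:
path = traverse([16001, 16005, 16009])
print(path)  # [16001, 16005, 16009]
```

Note that no per-path state lives in the network itself; the state is carried in the packet, which is the contrast with the RSVP LSP mesh described next.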
Google created a mesh of RSVP LSPs that cover all paths, creating a lot of extra state in their network. Then they were able to analyze perf across all LSPs to monitor state of large complex mesh inter and intra Google DCs. This approach adopted by other big providers using static LSPs to avoid RSVP state overhead. Then use SPRING/source routing to push packets across specific paths for monitoring and analysis.
Could also use across MPLS paths in DC. He seems to think that using MPLS in the DC is controversial. Presumably because VMware thinks they own the space with VXLAN?