
    What is Bytemobile’s T3100 — and why should operators and competitors take notice?

    Bytemobile has launched a new product line, the T3100 Adaptive Traffic Manager, which it is marketing as an integrated solution that gives operators access to real-time intelligence on network conditions and user experience, and lets them take traffic management decisions accordingly, and flexibly.

    You can read Bytemobile’s release here:
    http://www.bytemobile.com/news-events/2011/archive_300811.html

    By collapsing DPI, load balancing, caching and optimisation capabilities into one element, Bytemobile says it is creating a new class of network component. The idea is to give operators a way to react to, and assure, the user experience as data traffic volumes increase in next-generation mobile networks. In doing so, they could also create new monetisation opportunities. Bytemobile sees 4G/LTE as the inflection point for the introduction of these solutions, and says that the need for traffic management solutions has caught the big equipment manufacturers on the hop. As the likes of Cisco and NSN play catch-up, Bytemobile, by virtue of its existing role in the data plane, claims it can help operators address these issues today.

    Keith Dyer spoke to Ronny Haraldsvik, Vice President, Global Marketing, and Jeff Sanderson, Senior Director of Product Marketing (Adaptive Traffic Management), Bytemobile. They explained the thinking behind the design and development of the T3100 series, what operator needs it meets, and how they expect the competition to react.

    What is the T3100 about, and how is it different from your current Unison optimisation approach?

    Jeff Sanderson:
    We are taking what we do today and making sure that we are moving forward with the evolution of mobile networks — to try and manage the capacity crunch by creating a more holistic approach to the traffic management problem.

    Traffic management for operators tends to be a pretty fragmented approach today.

    They’re doing some specific DPI at certain points in the network, either standalone or integrated into components like the GGSN, and they create an enforcement point for being able to control the logical pipe a subscriber can get access to in terms of throughput and so forth. We typically sit behind that and optimise the content.

    Where we differentiate with Unison is that we inspect the content, which allows us to understand and manipulate it, largely to deliver a better user experience. The benefit of that, and the main focus of the value proposition, is the amount of data it removes from the downstream network, creating a less congested network where more users can get on and enjoy data services.

    Today the majority of that focuses on progressive downloads, short clips, YouTube and user-generated content, typically of a lower quality because that’s what 3G networks can sustain today. As you evolve the network, that clearly trends towards studio-quality, long-play TV programmes and movies watched over the mobile internet. As networks get faster, users increase both the quality of what they watch and the length of time they watch it. On really efficient high-speed networks you start to see viewing patterns track the normal TV viewing patterns in the evenings.

    Ronny Haraldsvik:
    We took on being able to fix video and make it work over mobile networks, but in doing that we took on the most difficult problem there is: beyond looking at the first few packets, we looked at the whole content and we shaped the content to make it better. Doing that allows us to take a more important role in the network. By leveraging the insight of being able to see everything that’s going on, we can bring that together into one platform in a way that no-one else can. So sitting at that Layer 6-7 level gives us a unique advantage.

    It’s a lot easier to go down the OSI stack than it is to go back up again, so we have a unique situation where the DPI people and the GGSN guys are at Layers 2-5, but they do very little with the content. They can see what the packet is and where it’s headed, and that’s it; they don’t follow the flow. They’re trying to go up the stack now, asking “how do we do optimisation, what else can we do?” But rather than inserting more network elements into the architecture, we’re collapsing them and bringing them together.

    What are the benefits of that collapsed, integrated approach?

    Jeff Sanderson:

    What we don’t do today in Unison is extend our field of vision to look at all the applications. As a subscriber on a laptop connected to the mobile internet, you probably have multiple other applications open at the same time as you’re watching a video. We’re just focusing on the video, looking at traffic conditions, and if a video is likely to stall we pre-emptively deploy techniques to avoid that. What we don’t know today is what else you’re doing that may be affecting it. Do you have Outlook open? Are you downloading a big attachment while trying to watch a video at the same time? By opening up our field of vision we can make assessments based on network conditions plus all the other applications you are using, and start to make wiser judgements on how to deliver a better user experience to you.

    So, for example, we could put some level of restriction on downloading an attachment to your PC, to give a bit of a boost to a real-time application such as video. The way we do it is to monitor real-time analytics on the platform, track a user, get a view in real time of how much stalling there is and how quickly a page is downloading relative to other benchmarks, and come up with an index. Based on that running index we then apply traffic management policy in a truly adaptive fashion.

    The way that differs from DPI today is that DPI looks at the first few packets to classify the application flow and applies a policy that holds for the lifetime of that flow. Now, the network changes very quickly. Maybe that’s OK for a short clip, but as you move towards longer-form video, you’ve lost sight of that control at the flow establishment point and you don’t know what’s happening an hour later. The network could have changed, but you can’t go back and change your policy dynamically on these existing policy platforms.

    Because we control each and every packet of each and every flow we are able to do it adaptively during the session.
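The adaptive loop Sanderson describes could be sketched, in broad strokes, as follows. This is purely illustrative: the class names, index weights and policy thresholds are assumptions for the sake of the example, not Bytemobile's actual implementation.

```python
# Illustrative sketch of an index-driven adaptive traffic-management loop:
# a rolling user-experience index is recomputed each monitoring interval,
# and policy can change mid-session -- unlike first-packet DPI
# classification, which fixes policy for the lifetime of the flow.
# All names, weights and thresholds below are hypothetical.

from dataclasses import dataclass

@dataclass
class FlowStats:
    stall_ratio: float    # fraction of playback time spent stalled
    page_load_ms: float   # observed page-load time
    benchmark_ms: float   # reference page-load time for this network

def experience_index(stats: FlowStats) -> float:
    """Blend stalling and relative page-load speed into a 0-1 score."""
    stall_score = max(0.0, 1.0 - stats.stall_ratio * 5)  # heavy stall penalty
    load_score = min(1.0, stats.benchmark_ms / max(stats.page_load_ms, 1.0))
    return 0.6 * stall_score + 0.4 * load_score

def apply_policy(index: float) -> str:
    """Pick a traffic-management action from the running index."""
    if index < 0.4:
        return "throttle-background"  # e.g. slow a large attachment download
    if index < 0.7:
        return "optimise-video"       # apply CPU-intensive optimisation
    return "pass-through"             # good experience: spend no CPU

# Recomputed every interval, so the action can change during the session.
stats = FlowStats(stall_ratio=0.15, page_load_ms=3200, benchmark_ms=1500)
print(apply_policy(experience_index(stats)))
```

Run each interval rather than once at flow setup, this kind of loop also captures the CPU-saving point made below: optimisation is applied only when the index shows the user would actually benefit.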

    With the T3100 platform we look at all apps and make better judgements on how to apply traffic management techniques to drive a better user experience. Obviously there’s a class of apps that really shapes how a user perceives his experience: if you are getting a lot of stalling on a video, you are going to notice it. So we use that to trigger a lot of adaptive policy in the platform.

    What we do today is very CPU intensive, and if we can do that more sparingly and only apply techniques when a user gets the benefit from it, you avoid burning up CPU in a network box that the operator has to spend money on, and you can drive the user experience in a more positive direction.

    Where in a network will operators apply these adaptive traffic management measures, and what applications will traffic management be put to?

    Jeff Sanderson:

    Traffic management today happens on the Gi interface, and as we evolve to 4G and other technologies we will see it distributed deeper into the radio and associated access networks. Traffic management will exist on every element along the path in evolved mobile networks, and how operators implement it in each of those discrete elements is how they will differentiate themselves. Each of those components has different limitations. For example, the GGSN is capable of the basic traffic management that standalone DPI vendors are doing today. As GGSN vendors evolve, they will be able to do what standalone DPI vendors do today, and we are happy for them to do that. What we do is far more adaptive and drives a far superior user experience for an operator’s subscribers.


    Ronny Haraldsvik:

    Putting the user at the centre of profiles also opens up new monetisation opportunities, because operators know what the network conditions are and what the user’s condition is, and can tie that to user profiles to be more proactive in the services they offer to subscribers in the network.

    You can take this to the point of knowing that a user has had a bad experience, due to an overload in the RAN or backhaul, and proactively pinging that user, so that by the time he comes home a discount is waiting for him, applied to his upcoming bill.

    We are in a sense not doing anything new, we are just bringing this together in one element, reducing TCO by up to 50% in doing so.

    It’s not a standard element, but then again neither is optimisation, nor the way operators insert DPI or load balancing. They’re in the network today, and what operators have asked us to do is bring them together. If by doing so you can get more intelligence about what’s going on, then the next step is to move it closer to the edge, add edge caching capabilities there too, and store content closer to the edge. It doesn’t stop with traffic management…

    Jeff Sanderson
    The benefit of what we do, from an operator perspective, is that they have a far better basis for tariffing. This is a contentious area. Today their segmentation models are pretty basic, and mostly volume-based. Moving forward they could break out of that model. The first thing is the ability to break out; the second is the tipping point, and we think the evolution of networks to 4G is the point at which they will be able to leverage these capabilities.

    We have seen a number of vendors putting together a policy-optimisation-traffic management portfolio, whether coming at it through acquisition, like Amdocs and Bridgewater, or through partnership, as with Vantrix and Ericsson, or Juniper and Openwave. What impact will this integrated approach have on those initiatives?

    Jeff Sanderson:
    Where does it play with policy and so forth, and all the things that drive this dynamic, self-organising approach to traffic management control? Operators are finding their feet with policy and understanding what it can and can’t do. Our view, and I think it’s probably shared by Cisco and Juniper to a degree, is that if you’re sitting in the data plane watching this traffic, you’re probably best placed to make real-time decisions on how to achieve this approach.

    The way we talk about that is in terms of a detect, decide, react loop. There are competing architectures that take that in a long loop: using things like network probes to detect conditions in the RAN, then sending that data to the PCRF, which decides and implements policy on a network element like DPI or the GGSN. That’s a pretty long loop. By the time the condition has flared up and the policy has been applied in the network element, it’s probably gone. So we’re trying to collapse that loop directly into the network element, and you can only do that if you’re adaptive. If you lose sight of a video flow once you’ve classified it in DPI, you don’t know what’s happening down the road, and you rely on external intelligence to tell you what to change. We do that natively within the platform.
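The collapsed loop Sanderson contrasts with the probe-to-PCRF architecture could be pictured like this. The class and method names are illustrative assumptions, not Bytemobile's API; the point is only that detection, decision and enforcement all run in one in-line element, re-evaluated every interval for the life of the session.

```python
# Hedged sketch of a "detect, decide, react" loop collapsed into the
# data-plane element itself, as opposed to the long loop
# (probe -> PCRF -> enforcement point). All names are hypothetical.

class InlineTrafficManager:
    """Detection, decision and enforcement in one element, per interval."""

    def __init__(self, stall_threshold: float = 0.1):
        self.stall_threshold = stall_threshold

    def detect(self, flow: dict) -> float:
        # The element already sees every packet of every flow, so no
        # external probe or reporting delay is involved.
        return flow["stalled_ms"] / max(flow["elapsed_ms"], 1)

    def decide(self, stall_ratio: float) -> str:
        return "boost-video" if stall_ratio > self.stall_threshold else "no-change"

    def react(self, flow: dict, action: str) -> dict:
        if action == "boost-video":
            flow["priority"] = "high"
        return flow

    def tick(self, flow: dict) -> dict:
        # One short loop, re-run every interval for the session's lifetime,
        # instead of a single decision at flow establishment.
        return self.react(flow, self.decide(self.detect(flow)))

flow = {"stalled_ms": 1200, "elapsed_ms": 8000, "priority": "normal"}
flow = InlineTrafficManager().tick(flow)  # 15% of time stalled -> boosted
```

Because `tick` is cheap and runs in the element that forwards the packets anyway, the policy can track a condition that flares up and subsides within seconds, which the longer probe/PCRF round trip would miss.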

    Ronny Haraldsvik:

    If you take a look at MWC 2011, there were a myriad of announcements about traffic management or optimisation activities; that was the theme. The shift in traffic, and where it’s headed, over the past 12 months completely took the NEPs by surprise. They have standard elements, yet lo and behold these non-standard elements are now taking control of traffic and helping operators mitigate the effects of this onslaught. No-one has this fully. Cisco has DPI and is saying anything can be done on the GGSN. We know better, and operators know better; they can’t just put everything on a Cisco 5000 or 9000 box and hope it’s all taken care of. That’s why we see these announcements and relationships like Vantrix/Ericsson, Juniper and Openwave, etc. They talk openly about the fact that it’s more of a latter-half-of-2012 solution in terms of commercial revenue recognition. That’s more or less trying to cobble together the GGSN with some optimisation and portfolio pieces. It’s not a holistic view of traffic management; it’s a reaction.

    Do we believe there will be more vendors coming into this space, or a repurposing of the M&A activity: the DPI guys getting together, F5 and load balancing, or the Amdocs acquisition? This is all related to what’s going on in the network, where the traffic pattern has shifted and operators need more intelligence and insight going forward. I would not be surprised if a month from now another adaptive traffic management solution is announced, because it’s a natural thing to happen.