Business Cloud News
Deutsche Telekom is looking to transform itself into a “software defined operator”

Operator group Deutsche Telekom is looking to transform itself into a “software defined operator”, the firm’s VP for aggregation, transport, IP and fixed access, Axel Clauberg, told Broadband World Forum delegates on Thursday. The telco says bringing in the skills currently being mobilised by cloud heavyweights such as Facebook and Google will be essential.

With Software Defined Networking (SDN), virtualisation and cloud dominating much of the discussion at the event, many operators see abstraction of the software layer and virtualisation of hardware as key to managing the anticipated surge in data.

“Networks today are not ready to support the demand we will see in the future, so we need to invest to support this growth going forward,” said Clauberg. However, operators carry complexity in their networks because of the legacy systems still in place, and it is far harder to switch off old technology than it is to introduce new technology.

“We need a drastic change on how we are running this,” he added. “We need a moonshot programme to master this challenge and we need visionary people with bright ideas.”

Redefining Deutsche Telekom as a software defined operator initially requires simplification, as networks today carry many protocols; Clauberg questioned whether supporting IPv4, MPLS and IPv6, among others, is really necessary.

“What is needed to run a network in the future? Not for what you would do today, but what would you do in 2018 and beyond?” he asked. “How many layers do we need, do we need a separate optical transport layer? If it’s all IP, we should optimise around IP.”

He said that going forward IPv6 will be the only address protocol in the core, with IPv4 supported as a service in the network, and that the core will be simplified by relying on a smaller set of technologies: tunnelling, DHCP, 100GE and IPv6.
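The article does not detail how Deutsche Telekom would deliver IPv4 as a service, but the standard building block for carrying legacy IPv4 traffic across an IPv6-only core is 4in6 encapsulation (RFC 2473), which underpins tunnelling schemes such as DS-Lite. The sketch below is purely illustrative, not DT's design: it wraps a raw IPv4 packet in a fixed 40-byte IPv6 header whose Next Header field is 4, marking the payload as an encapsulated IPv4 packet. The addresses and the `encapsulate_4in6` helper name are assumptions for the example.

```python
import socket
import struct

def encapsulate_4in6(ipv4_packet: bytes, src6: str, dst6: str) -> bytes:
    """Wrap a raw IPv4 packet in an IPv6 header (4in6, RFC 2473 style).

    Next Header = 4 tells routers the IPv6 payload is an IPv4 packet --
    the mechanism tunnelling schemes like DS-Lite use to carry legacy
    IPv4 traffic across an IPv6-only core.
    """
    version_tc_fl = 6 << 28            # version 6, traffic class 0, flow label 0
    payload_len = len(ipv4_packet)     # IPv6 payload length field
    next_header = 4                    # 4 = encapsulated IPv4
    hop_limit = 64
    header = struct.pack(
        "!IHBB16s16s",                 # 4 + 2 + 1 + 1 + 16 + 16 = 40 bytes
        version_tc_fl,
        payload_len,
        next_header,
        hop_limit,
        socket.inet_pton(socket.AF_INET6, src6),
        socket.inet_pton(socket.AF_INET6, dst6),
    )
    return header + ipv4_packet

# A dummy 20-byte IPv4 header standing in for a real legacy packet.
dummy_v4 = bytes(20)
tunnelled = encapsulate_4in6(dummy_v4, "2001:db8::1", "2001:db8::2")
assert len(tunnelled) == 40 + 20       # fixed IPv6 header + IPv4 payload
assert tunnelled[6] == 4               # Next Header marks the 4in6 payload
```

The point of the sketch is the simplification Clauberg describes: the core only ever forwards IPv6, and IPv4 survives solely as a payload at the tunnel endpoints.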

“The biggest pain for us is that there is so much legacy technology in networks that it is difficult to bring new services to the market. We need to be able to program new services without rearchitecting the network,” he said.

This is something of a sore point for operators, which have seen over the top players exploit their networks without bearing any of the investment costs.

“We have been structured in silos typically,” he said. “But internet companies such as Facebook and Google don’t have silos, they have people who have programming, engineering and operational skills. So we also have to bring new skill sets into our organisation and staff teams in a different way.”

This story originally appeared on our sister site,

  • Michael Bushong (@mbushong) October 25, 2013 at 3:41 pm

    I think the moonshot analogy is a good way to think about it. The changes we are talking about will be significant, and they go well beyond just changing up the technologies running in the network.

    In a software-defined world optimized for applications, the metrics are not the same. It is more about experience than mere uptime and connectivity. How does this impact MBOs and reporting?

    In a highly orchestrated environment where infrastructure touches other infrastructure, what does a purchasing decision look like? A networking purchase no longer only touches networking. The compute and storage guys might be impacted. How does this change evaluation? Purchasing? Budgeting? Testing? Deployment?

    And who creates the software glue that makes it all work? This is a new job. Where does it reside? Who staffs it?

    The changes will be as much organizational as they are technical. Companies will need to plan accordingly. If you view this as a technology shift, you will likely find that your transition fails at least once. It is not clear to me that companies will be able to afford to fail on this transition.

    -Mike Bushong (@mbushong)
