
CIO
  • Reduce your network complexity with AI

    As networks continue to grow in complexity and become more prone to damaging attacks, overstretched IT teams need more time, effort, and expertise than ever before to keep them up and running and meet user needs. It is becoming increasingly challenging to prevent service degradation, remediate attacks, and deliver consistent, high-quality digital experiences to a hybrid workforce. That’s especially true as organizations face a shortage of skilled networking talent.

    To simplify and streamline network operations and reduce the burden on IT teams, many decision makers are considering network operations solutions that are AI-enabled. These solutions can help IT teams make better decisions faster, reduce costs, and optimize network resources.

    In fact, there’s a growing sense of urgency among business leaders to accelerate the deployment of AI across their operations. In a recent global Cisco survey of more than 8,000 business leaders, 61% said they believe they have one year at most to deploy their AI strategy before they’ll begin to feel negative impacts on the business, such as reduced operational efficiency and an inability to meet customer expectations and attract top talent.

    When applied to complex network operations and security, AI provides many advantages, including helping IT staff make quick, informed decisions and automating many labor-intensive and error-prone configuration and management activities. The adoption of AI technologies provides a set of significant operational improvements in several key areas.

    1. Empower IT teams

    AI has a vital role to play in reducing the operational burden on IT teams by empowering them to manage network complexity with much less effort. AI-enabled tools are designed to optimize the network through automation and do much of the heavy lifting to ensure that everything is running smoothly and efficiently.

    An AI-enabled network operations solution can automatically collect and correlate data in the network from end to end with no human intervention. This data can be used to derive insights through analytics that help: detect anomalies such as configuration errors; provide guidance to IT staff for remediation; and predict and resolve potential performance issues before they affect users. In addition, AI frees up highly skilled IT staff to focus their attention on other high-priority activities. Research has demonstrated that the adoption of AI tools resulted in IT staff spending twice as much time focused on innovation.
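
    The article doesn’t describe the underlying algorithms, but a minimal sketch of the kind of check such tooling automates (a trailing z-score over latency telemetry; the window, threshold, and synthetic data below are invented for illustration) could look like this:

    ```python
    import numpy as np

    def flag_anomalies(latency_ms: np.ndarray, window: int = 60, threshold: float = 3.0) -> np.ndarray:
        """Flag samples deviating more than `threshold` standard deviations
        from the mean of the trailing `window` samples (a basic z-score check)."""
        flagged = []
        for i in range(window, len(latency_ms)):
            history = latency_ms[i - window:i]
            mu, sigma = history.mean(), history.std()
            if sigma > 0 and abs(latency_ms[i] - mu) > threshold * sigma:
                flagged.append(i)
        return np.array(flagged, dtype=int)

    # Synthetic example: a latency spike injected at sample 150 should be flagged.
    rng = np.random.default_rng(0)
    series = rng.normal(20.0, 1.5, 200)   # ~20 ms baseline latency
    series[150] = 45.0                    # injected fault
    print(flag_anomalies(series))
    ```

    Production systems correlate many such signals at once and use far richer models; the point is only that flagged samples are what feed the remediation guidance described above.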

    2. Strengthen end-to-end assurance

    End-to-end assurance—the holistic management of network performance, reliability, and security—is another key area where the application of AI technology can make life simpler for IT teams. Whether they’re in the office or working remotely, users expect reliable, fast, consistent, and protected experiences. Providing these experiences means IT teams need enhanced visibility across the entire network. AI-enabled tools give IT staff real-time visibility into network connectivity from end to end. They also provide a common network management language for assuring continuity and optimizing the quality of the digital experience for workers, no matter where they’re located.

    3. Unlock network value

    AI can also help businesses scale their network performance and operations in a variety of ways. AI tools can streamline real-time monitoring, management, and threat detection. They can also automate network configuration and workflow orchestration as new applications and use cases come online. These include precise forecasting of demand growth, simplified compliance, pervasive real-time video communications, and augmented reality. Overall, the application of AI tools can expand the range of possibilities the network can support.

    4. Improve threat detection

    Highly sophisticated and destructive cyber threats are on the rise. Nearly two-thirds of the 4,700 security professionals recently surveyed by Cisco reported suffering major security incidents that jeopardized business operations.

    AI-enabled security solutions can be a powerful defense in the fight against cyberattacks. These solutions enable security teams to continuously monitor network traffic so that any anomalies, intrusions, or vulnerabilities can be detected and neutralized before they can damage the network.

    Find the path for your AI network transformation

    The acceleration of AI technologies is creating new growth drivers for organizations in every industry and improving productivity and business efficiency. The application of these technologies in the network can dramatically simplify operations and provide users with great digital experiences, while also giving organizations a significant edge over the competition.

    Cisco is uniquely positioned to help customers leverage AI-enabled networks to simplify operations and enhance security. The company brings a depth and breadth of IT expertise gained from managing more than 50 million network devices annually, and can empower IT teams to do more in less time while delivering consistent and protected network experiences. 

    Start now and explore how Cisco networking platforms and solutions can help you implement AI-enabled network operations in your organization.

    Networking


  • Internet startup launches while embracing AI adoption

    When Brightspeed launched in 2022, the fiber broadband internet service provider had ambitious goals, having acquired a service territory encompassing more than 6.5 million locations, mainly in rural and suburban communities across 20 U.S. states.

    As the Charlotte, North Carolina-based company planned its fiber build across its footprint, its IT specialists realized that, with artificial intelligence (AI) emerging as the consummate transformative technology, Brightspeed needed to embrace its adoption or fall behind.

    In particular, the company had to integrate billing data from SAP S/4HANA, enterprise resource planning (ERP) software designed for large enterprises, with SAP Billing and Revenue Innovation Management (BRIM) and replicate the information to Google BigQuery, a fully managed, AI-ready data analytics platform.

    Once the system was up and running, Brightspeed would have the ability to operate in multiple formats and multiple engines in real time via the cloud. However, the transition would be challenging.

    With millions of records to filter, comprising several terabytes (TB) of data, the IT team was hampered by manual, repetitive processes. Poor data quality distorted findings and compromised real-time decisions, and created unpredictable scenarios that had to be remedied one at a time.

    With plans to reach more than three million new fiber network passings – locations where the company connects fiber to homes and businesses – change was essential and could only be achieved by implementing a seamless adoption strategy.

    Harmonizing data

    The project to adopt the existing technologies and replicate the data on Google BigQuery launched with the understanding that it would be impossible to successfully manage voluminous, multi-gigabyte (GB) transactions daily without first achieving data harmonization.

    It required Brightspeed to provide information on revenue, payments, receivables, and payables using accurate, real-time numbers to support recognized revenue, revenue reported, deferred revenue and payment summaries; resolve disputes; and help with dunning – the process of regularly reminding customers that a particular bill was due.

    This would require data semantics, an understanding of the components in a database and their relation to each other, as well as data fabrics, the architecture facilitating end-to-end integration of the various data pipelines and cloud environments.

    The world’s leading ERP software vendor, SAP, was selected to spearhead the undertaking for a variety of reasons, including its SAP Datasphere solution, renowned for converting businesses into data-driven enterprises, and its existing partnership with Google Cloud.

    Once the project was completed, Google BigQuery users would have the ability to ingest real-time billing and finance information from SAP BRIM, which specializes in businesses with high volumes of customers, subscriptions, and pay-per-use transactions, as well as established ERP software SAP S/4HANA.
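
    The article doesn’t detail the replication mechanics, but a minimal sketch of the general pattern (streaming billing rows into a BigQuery table with the google-cloud-bigquery Python client; the project, dataset, table, and field names below are hypothetical) might look like this:

    ```python
    from google.cloud import bigquery

    # Hypothetical destination table; the real project, dataset, and schema will differ.
    TABLE_ID = "brightspeed-example.billing.brim_invoices"

    def stream_invoices(rows: list[dict]) -> None:
        """Stream billing records (e.g., extracted from SAP BRIM) into BigQuery."""
        client = bigquery.Client()
        errors = client.insert_rows_json(TABLE_ID, rows)  # streaming insert
        if errors:
            raise RuntimeError(f"BigQuery rejected rows: {errors}")

    stream_invoices([
        {"invoice_id": "INV-001", "amount": 59.99, "currency": "USD", "status": "open"},
    ])
    ```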

    In addition to the range of enterprise-grade data and analytics solutions available from SAP, the multinational’s management and solutions (M&S) teams have taken an active role in supporting customers with adoption and consumption.

    Making the most of each digital opportunity

    The integration, completed in January 2024, provided Brightspeed with a new cloud-based replication tool automating integration into Google BigQuery, eliminating manual processes and repetitive tasks.

    The real-time analytics now available to the internet service provider allow in-the-moment decision-making in areas that had been challenging in the past, and have increased data load replication threefold.

    The elimination of the manual system has also enabled Brightspeed to save 2,000 work hours each year, which is time that can be utilized to drive faster innovation.

    It also increased accuracy in recognizing revenue, payment issues, and disputes by 76 percent.

    Rather than end-of-day payment reports, workers are able to receive updates in real time or hourly.

    This has saved Brightspeed approximately 80 hours of work each week.

    Having remedied a pressing IT challenge, Brightspeed now has the platform necessary to empower the company’s transformation into a truly intelligent enterprise.

    Brightspeed was named a winner in the “Transformation Titan” category at the SAP Innovation Awards 2024, an annual ceremony recognizing organizations using SAP technologies to make a difference.

    By continuing to utilize these and other readily accessible tools, the company can more easily introduce more digital opportunities and use them to their full potential. You can read more about what Brightspeed did to earn this coveted award in their pitch deck.

    Digital Transformation


  • Ingesan embraces a new way to approach HRM with AI

    Ingesan, a subsidiary of the OHLA Infrastructure Group, the Madrid-based construction and concession management multinational, has launched the Empath-IA project, a joint initiative between HR management and the digital transformation division that aims to address the increasingly competitive demands and complexities in HRM. And at its center is Nuria Fuentes, Ingesan’s CIO and leader of systems and digital transformation. 

    Empath-IA also arose out of a need to respond strategically to increasing competition, with the project serving as a catalyst for a commanding position in the services sector.

    “We detected the need to modernize our processes, taking advantage of the capabilities offered by AI to improve efficiency, precision, and personalization in our operations,” she says. With Empath-IA, she and her team not only seek to continuously improve internal procedures, but also enhance the experience of both employees and their customers.

    There were, unsurprisingly, some initial wrinkles to iron out post launch last year. “Implementing any project of this magnitude comes with significant challenges,” she says. One of the main ones was ensuring seamless integration of AI technology into its existing systems and processes. “This involved not only the acquisition and implementation of appropriate technologies, but also careful planning and coordination between the HR and digital transformation teams,” she adds.

    The benefits achieved

    Citing the advantages of AI in HR, Fuentes highlights various priorities. First, it allows greater efficiency in process management by automating repetitive and administrative tasks, thus freeing up time and resources that can be reallocated to more strategic and value-added activities. Additionally, AI can improve accuracy and consistency in decision-making by analyzing large volumes of data quickly and objectively. Plus, it enables greater personalization in the employee experience by providing recommendations and solutions tailored to the individual needs of team members. “AI transforms the HR function, moving from a reactive approach to a proactive, talent-focused one,” she says.

    As Empath-IA took shape, Ingesan allocated 5% of the innovation budget to implement it and to explore other opportunities — an investment already with measurable ROI. Furthermore, the reception from employees has been positive. “While it’s natural for some concerns and resistance to change to arise, we’ve actively worked to engage our team in the process, and demonstrate the tangible benefits that AI can bring to their daily work.”

    So as they experience improved efficiency, accuracy, and personalization of services, excitement grows about the new opportunities AI offers to improve their performance and professional development. “It’s very important for us, before embarking on the implementation of AI, to identify and understand specific use cases that can benefit from this technology,” she says. “So it’s crucial to carefully select which initiatives offer the greatest potential for return on investment and operational improvement.”

    As far as lessons learned are concerned, Fuentes highlights the importance of interdepartmental collaboration, clear communication, and continuous staff training to ensure successful adoption and maximization of AI’s potential throughout the organization.

    Looking back, Fuentes remembers navigating through different stages as part of the project roadmap. “At first we focused on the digitization of fundamental processes, such as digital signatures on documents and the automation of tasks, such as sending payrolls,” she says, a task that not only improves operational efficiency, but contributes to sustainability pursuits by reducing paper consumption. “At the same time, we invest in training and education of our staff in AI, with the aim of developing an organizational culture oriented toward innovation and continuous learning,” she adds. 

    A strategic focus

    The organization also began actively exploring the potential of gen AI to offer value-added services to employees and customers by creating personalized experiences, and automating processes to improve satisfaction and loyalty.

    So using the Empath-IA program as a starting point, Ingesan is prioritizing the application potential of gen AI in the field of occupational risk prevention. This approach is based on recognizing the critical importance of health and safety in any organization, and being able to offer further value-added services.

    “By applying generative AI to this field, we demonstrate our commitment to the safety and well-being of our employees and customers,” she says. “This not only strengthens our reputation as a responsible employer and supplier, but also differentiates us in an increasingly competitive market by offering innovative and future-oriented solutions, which represents a strategic step toward continuous improvement and operational excellence.”

    Artificial Intelligence, CIO, Digital Transformation, Generative AI, Human Resources, IT Leadership, ROI and Metrics


  • Expectations vs. reality: A real-world check on generative AI

    Is generative AI so important that you need to buy customized keyboards or hire a new chief AI officer, or is all the inflated excitement and investment not yet generating much in the way of returns for organizations?

    Gen AI takes us from single-use models of machine learning (ML) to AI tools that promise to be a platform with uses in many areas, but you still need to validate they’re appropriate for the problems you want solved, and that your users know how to use gen AI effectively.

    For every optimistic forecast, there’s a caveat against a rush to launch. Multiple studies suggest high numbers of people regularly use gen AI tools for both personal and work use, with 98% of the Fortune 1000 experimenting with gen AI, according to a recent PagerDuty study. But now organizations appear to be taking a more cautious approach when it comes to official deployments.

    For example, a quarter of IT decision-makers in Foundry’s 2023 AI Priorities Study are piloting gen AI technologies, but only 20% have moved on to deployment. Senior leaders in CCS Insight’s Employee Technology and Workplace Transformation Survey gave similar responses: by the end of 2023, 18% had already deployed gen AI to their full workforce, and 22% were ready to deploy. “People want to see it be real this year,” says Bola Rotibi, chief of enterprise research at CCS Insight. But talking to IT teams like the AI professionals in Intel’s 2023 ML Insider survey suggests only 10% of organizations put gen AI solutions into production in 2023.

    Ready to roll 

    It’s shorter to make a list of organizations that haven’t announced their gen AI investments, pilots, and plans, but relatively few are talking about the specifics of any productivity gains or ROI. But that may be as much about protecting any competitive advantage as it is about any lack of success.

    For example, many Google customers, like Goldman Sachs, IHG, and Mercedes Benz, talking about building with its Gemini gen AI tools at the recent Google Cloud Next conference turned out to still be at the pilot stage rather than in deployment.

    Pilots can offer value beyond just experimentation, of course. McKinsey reports that industrial design teams using LLM-powered summaries of user research and AI-generated images for ideation and experimentation sometimes see a reduction upward of 70% in product development cycle times. But it also emphasizes that those design teams need to do significant evaluation and manipulation of gen AI output to come up with a product that’s realistic and can actually be manufactured, and the recommendation is still to set policies, educate employees, and run pilot schemes. Similarly, Estée Lauder sees value from pilots like an internal chatbot trained on customer insights, behavioral research, and market trends to make those analytics more broadly available in the business, but is still working on how to actually deliver that value.

    When it comes to dividing gen AI tools into task and role-specific vertical applications, or more general tools that can be broadly useful to knowledge workers, organizations seem able to adopt the latter more quickly.

    As expected, Microsoft claims its own staff gets significant value from the gen AI tools it has in market, like Copilot for Microsoft 365. “Our best users are saving over 10 hours a month,” says Jared Spataro, CVP, modern work and business applications at Microsoft, and 70% of Copilot users say it makes them more productive, working up to a third faster.

    Customers like Telstra report similar time savings for their early adopters, although Forrester lead analyst on Copilot for Microsoft 365 JP Gownder suggests five hours a month is a more common gain. The other question is how well that will scale across the organization. Large Japanese advertising agency Dentsu, for instance, is very enthusiastic about Copilot for Microsoft 365, claiming staff save up to 30 minutes a day on tasks.

    Adoption of Copilot so far tends to be in what he refers to as pockets, which matches how McKinsey reports that most gen AI deployments are happening in specific departments: marketing and sales, service and support, and product development.

    Telcos surveyed by McKinsey demonstrated the same blend of optimism and restraint as other industries, with a majority claiming to have cut costs with gen AI, and seen increases in call center agent productivity and improvement in marketing conversion rates with personalized content — both with models deployed in weeks rather than months. On the other hand, the impact has been low outside customer service or mapping network infrastructure.

    Organic growth

    Some of Microsoft’s original test customers have already moved from pilot to broad deployment. One of the earliest Microsoft 365 Copilot trials was at global law firm Clifford Chance, and the company is now deploying it to the entire workforce, alongside its custom AI tool, Clifford Chance Assist, built on Azure OpenAI. The company is careful to note that any legal output from gen AI is clearly labelled and checked by a qualified lawyer but, again, the main benefits are productivity gains for knowledge workers: live transcripts, meeting summaries, and both implicit commitments and agreed-on tasks from those meetings.

    “This is an incredible technology that can raise productivity, save time, and be a great human assistant,” says Gownder. “But it’s different from the tools we’ve been releasing over the last 40 years in computing. It has these characteristics you need to learn about to be truly successful.”

    He offers a string of questions to assess the AI quotient of your organization:

    • Do you have a basic understanding of how AI and prompt engineering work?
    • Have you had training?
    • Do you feel confident about being able to learn these things?
    • Are you motivated to get involved?
    • Are you aware of what can go wrong and how you can be an ethical user of these things?

    Another issue is getting staff to make gen AI tools part of their workflow. “Some people are really bullish on Copilot and say they’re having a great experience with it,” adds Gownder. Others find bumps in the road, though, where half of users see productivity gains and the other half doesn’t use the tools. Frequently, that’s because enterprises are underinvesting in training by an order of magnitude.

    Almost every major company evaluating Copilot for Microsoft 365 is only planning on an hour of training for staff instead of the 10 he suggests. “This is a core skill and you need to invest in training here because otherwise it’s going to bite you,” he says. That’s key both for gen AI deployments to succeed, and to get the most out of the gen AI features and natural language interfaces that’ll become common in commercial software, from Photoshop to Zoom.

    Very specific successes

    There are gen AI success stories in verticals like document engineering, where Docugami offers custom small language models that build a knowledge graph from a customer’s own complex documents, and can be used for both document generation and to extract data.

    And commercial insurance is a vertical Docugami CEO Jean Paoli says has been an early adopter, including statements of value and certificates of insurance, as well as policy documents with renewal dates, penalties, and liabilities. That’s critical information describing the risk of both individual customers and the entire portfolio, which has been difficult to manually extract and consolidate to use for generating new quotes, or representing the portfolio to reinsurers. “These are real scenarios that save you millions of dollars, not a few hundred bucks,” Paoli says.
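
    Docugami’s own models and pipeline aren’t shown in the article, but a rough sketch of the general pattern it describes (asking a language model to return structured fields from a policy document; the client, model name, and field list below are assumptions for illustration) could look like:

    ```python
    import json
    from openai import OpenAI  # any LLM client would do; used here only as an example

    client = OpenAI()  # assumes an API key is configured in the environment

    FIELDS = ["renewal_date", "penalties", "liability_limit"]  # hypothetical fields

    def extract_policy_fields(policy_text: str) -> dict:
        """Ask a language model to return the requested fields as a JSON object."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            response_format={"type": "json_object"},
            messages=[
                {"role": "system",
                 "content": "Extract " + ", ".join(FIELDS) + " from the insurance policy. "
                            "Reply with a single JSON object; use null for missing fields."},
                {"role": "user", "content": policy_text},
            ],
        )
        return json.loads(response.choices[0].message.content)
    ```

    Fields extracted this way can then be consolidated across a portfolio, which is the quoting and reinsurance problem Paoli describes.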

    Like everyone else, large Docugami customers created gen AI committees and started pilots in 2023, but many have already moved from discovery to implementation, starting production deployments at least six months ago and seeing real returns, chief business officer Alan Yates says. In life sciences, one customer uses the platform for clinical trial documentation, compliance, and data exploration. “It took them six months to do this work previously and now it takes them a week,” he says.

    Coding is another vertical where adoption of gen AI in production is increasingly common, whether that’s GitHub Copilot, Google’s new Gemini Code Assist, AWS CodeWhisperer, or tools like ChatGPT that aren’t developer specific.

    Productivity improvements can be much lower initially, though. When Cisco first rolled out GitHub Copilot to 6,000 developers, they only accepted the generated code 19% of the time. Now nearly half of code suggestions are accepted. Saving just six minutes of developer time a month is enough to cover the cost, according to Redfin, although there are other metrics like code quality that organizations will want to track as well.

    But the gen AI gains can also be much higher for low code platforms where citizen developers with less expertise get more benefit from the assistance. Digital insurance agency Nsure.com was already using Power Automate extensively, but describing an automation flow in natural language is much faster than even a drag and drop interface. Workflows that took four hours to create and configure take closer to 40 minutes with Copilot for Power Automate, an improvement of over 80%.

    Then there’s Microsoft customer PG&E, which built an IT helpdesk chatbot called Peggy with the low code Copilot Studio gen AI tool in Power Platform that handles 25 to 40% of employee requests, saving over $1.1 million annually, principal program manager for Microsoft Copilot AI Noa Ghersin says. And having Peggy walk employees through unlocking their access to SAP saves the helpdesk team 840 hours a year alone.

    Organizations that have already adopted Power Platform for low code and RPA find they can make that automation more powerful using Copilot Studio to orchestrate processes where there are multiple workflows to choose from, like ticket refunds for Cineplex. Agents used to spend five to 15 minutes processing a refund even with automation, and now that’s 30 to 60 seconds.

    Counting the cost

    Set monthly subscriptions can seem expensive, but it’s hard to accurately estimate costs for on-demand gen AI tools, which may gate some deployments. The costs for individual gen AI tasks can be pennies, but even small costs add up.

    “Cost is a primary thing you have to take into account in gen AI, whether you go to third-party vendors or even internally,” says LinkedIn principal staff software engineer Juan Bottaro. His team recently rolled out a new gen AI feature for premium users that uses your profile to suggest if you’re a good match for a job posting, and what skills or qualifications might improve your chances.

    “There were several times where we would’ve liked to move much faster because we felt the experience was a lot more mature, but we had to wait because we just didn’t have enough capacity and GPUs available,” he says.

    It’s hard to predict costs for novel workflows, and any assumptions you make about usage will probably be wrong because the way that people interact with this is very different, he adds. Instead, deploy to a small percentage of users and extrapolate from their behavior.

    Initially, you may see cost savings because the speed of prototyping is dramatically and almost deceptively fast. Training and testing a classifier to understand intent typically takes one to two months, but his team was able to get prototypes of what they wanted to deliver in just a couple of days. “In a week, you can get something that looks like a finished product,” says Bottaro. “We managed to build something that looks very close to what you see today in the premium experience in a month or two.”

    But getting from something that’s 80% of what you want to the level of quality you need to deploy will often take much longer. In this case, another four months.

    It’s still too early to learn lessons from either technical or cost control failures in gen AI pilots, CCS Insight’s Rotibi says, but users can consider quotas and rate-limiting outbound requests to cloud AI services through API management gateways, just like other cloud services. The majority plan to limit the use of gen AI to targeted roles, individuals, or teams because of the pricing. “That’s a lot of money if you want to go across the organization,” she says.
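
    As a minimal illustration of that quota idea (a local token-bucket limiter in front of outbound gen AI calls; the capacity and refill rate are placeholders, and a real deployment would typically enforce this in an API management gateway instead):

    ```python
    import threading
    import time

    class TokenBucket:
        """Minimal token-bucket limiter for outbound gen AI API requests."""
        def __init__(self, capacity: int = 60, refill_per_sec: float = 1.0):
            self.capacity = capacity
            self.tokens = float(capacity)
            self.refill_per_sec = refill_per_sec
            self.updated = time.monotonic()
            self.lock = threading.Lock()

        def allow(self) -> bool:
            with self.lock:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.updated) * self.refill_per_sec)
                self.updated = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return True
                return False

    limiter = TokenBucket(capacity=60, refill_per_sec=1.0)  # roughly 60 requests per minute

    def call_llm_with_quota(prompt: str) -> str:
        if not limiter.allow():
            raise RuntimeError("Gen AI quota exceeded; request rejected before reaching the provider")
        # ... forward the prompt to the model provider here ...
        return "response"
    ```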

    What are you measuring?

    Self-reported productivity isn’t necessarily the best way to measure gen AI deployment success, and successful deployments may even change what metrics matter, Gownder says. “If you’re pushing your entire tier-one support off to generative AI and you have really good natural language, the success rate will go up, so everything that gets to a human is a harder problem,” he says. “It’s more long tail and white-glove hand holding, and the metric is more about customer satisfaction than the length of the call.”

    Just measuring the quality and accuracy of gen AI results is difficult given that it’s non-deterministic; the same inputs will likely give you a different result every time. That’s not necessarily a flaw if they’re correct and consistent, but does make it harder to evaluate, so unless you have an existing tool to compare it to, you have to create a benchmark for evaluating performance.

    “Defining whether something is right or wrong becomes very subjective and difficult to measure,” Bottaro says.

    To evaluate the tool, the team created shared guidelines for what a good response looks like. Similarly, for the Ask Learn API powering Copilot for Azure, Microsoft built a ‘golden dataset’ of representative, annotated questions and answers with reference data for ground truth to test against — and metrics to represent — answer quality.
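
    Neither team’s full harness is described in the article, but a minimal sketch of scoring answers against a golden dataset (the example entry, the similarity metric, and the scoring loop are assumptions; production setups use LLM judges or task-specific metrics) could look like:

    ```python
    import difflib

    # Hypothetical golden dataset: question, reference answer, reference source.
    GOLDEN = [
        {"question": "How do I resize an Azure VM?",
         "answer": "Stop the VM, change its size in the portal or CLI, then restart it.",
         "source": "https://learn.microsoft.com/azure"},
    ]

    def score_answer(candidate: str, reference: str) -> float:
        """Crude textual similarity in [0, 1], used only to show the shape of the loop."""
        return difflib.SequenceMatcher(None, candidate.lower(), reference.lower()).ratio()

    def evaluate(generate) -> float:
        """`generate` is the system under test: it maps a question to an answer string."""
        scores = [score_answer(generate(item["question"]), item["answer"]) for item in GOLDEN]
        return sum(scores) / len(scores)

    # evaluate(lambda q: my_assistant(q))  # compare successive runs against the same benchmark
    ```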

    Organizations are often more interested in whether they make money than save it by deploying gen AI, notes Rotibi. “I can see this as a productivity capability and an efficiency improvement for my workforce,” she says. “But where am I going to make money as an organization?”

    There’s pressure to demonstrate true ROI, Gownder adds, but he warns we’re not at that point yet. It may be easier to connect role-specific tools like Copilot for Sales to improvements in conversion rate, deal flow, or the mean time to resolution of a call, but he cautions against assuming a direct causal relationship when there are so many variables.

    Less quantifiable benefits can still be valuable in terms of TCO, though. “Let’s say giving people Copilot not only saves them time, but takes tedious tasks off their plates,” says Gownder. “That could improve their employee experience. We know employee experience benefits tend to lower attrition, and make people more motivated and engaged. There’s a lot of positive productivity from the psychological side of that.”

    But sheer enthusiasm for gen AI and LLMs complicates things, says Bottaro: “We’re faced with a problem of, ‘Let’s find out how to measure value because I definitely want to build it.’ That’s looking at it the wrong way round.” He suggests going back to the same objective function of success metrics you’d use for any product, and being open to the possibility that for some use cases, traditional AI will be good enough.

    Is gen AI failing?

    There are valid questions about where it’s appropriate to adopt gen AI, how to stop users accepting inaccurate answers as irrefutable truths, and the concerning inclusion of both copyright and inappropriate material in training sets. But negative publicity and scaremongering can exaggerate risks and ignore the useful things you can already do if you adopt gen AI responsibly.

    Reported gen AI failures are often as much about irresponsible behavior by users testing boundaries, or organizational failure to put sufficient guardrails in place when launching AI-powered tools, as they are about inherent issues with the models themselves. Embarrassingly, at one point in 2023, OpenAI’s own $175 million VC fund was under the control of a fake identity, but that appears to be just another example of someone using AI-powered tools to help them with good old-fashioned business fraud.

    Other concerns about gen AI involve deepfakes or simpler digital forgeries, potential legal risks around copyright of data used for the training set, and questions about compliance when using gen AI with sensitive or confidential data.

    As with any cloud model, the notion of shared responsibility is key. AI providers need to supply models and services that are safe to use, but organizations adopting AI services must read the model cards and transparency notes, and test they’re adequately constraining the way they can be used.

    “Some organizations have overextended to the customer with chatbots and realize they’re getting inconsistent answers,” Gownder says. But that doesn’t usually mean abandoning the project. “Maybe they pull it back and try to iterate offline before they launch it to customers,” he adds.

    Organizational maturity in gen AI tends to track maturity in AI generally, and most companies adopting it say it’s helping them invest elsewhere. “They’re investing more in predictive AI, computer vision, and machine learning,” says Gownder. Businesses building their own AI tools are using multiple technologies and treating gen AI as a component rather than a solution.

    The best correction to gen AI hype is to view it as both a groundbreaking technology and just another tool in the toolbox, says Bottaro.

    Artificial Intelligence, Development Tools, Emerging Technology, Generative AI, IT Leadership, Microsoft


  • Why Ericsson pushed hard on its move to the cloud

    When Hultin took over as Ericsson's CIO four years ago, the company set about reviewing a large number of its outsourcing contracts. At the same time, the cloud team, led by VP of cloud services Johan Sporre Lenneberg, pushed for modernization and a clear cloud strategy going forward.

    "We decided to combine the selection of new partners with the cloud migration, and we put a great deal of effort into working out what a modern collaboration structure should look like. We understood that we needed a cloud partner to take responsibility for systems integration and infrastructure, and that our role was to hold the ecosystem together," he says.

    Working out what the model would look like, and what requirements to place on each partner, called for a long and thorough procurement process involving all of the major systems integrators.

    "That gave us the inspiration to shape the final model. We achieved it by working together," he says.

    More than 10 partner companies were in the running, but in the end the global shared services company HCL was chosen as the main partner. Just as the collaboration was settled and the large-scale cloud migration was about to begin, the COVID-19 pandemic hit and the urgency rose sharply.

    "We had to go from thinking about how to migrate to how fast we could migrate," says Sporre.

    A major factor behind the move to the cloud was a strategy built on the growing demand to identify and use new technologies more quickly. Operating with lead times of six or 12 months, as IT departments had long done, was simply not sustainable. Speed had become the top priority for accessing new technology, generating revenue, and rolling out infrastructure.

    The need for groundwork

    In parallel with the entire procurement process, work was done to lay the groundwork for the cloud with a focus on risk, and for solid information management and regulatory compliance.

    "You have to go quite deep into managing and classifying your information," says Hultin.

    Throughout the process, a review team also continuously monitored the commercial and legal aspects, which in turn required a new operating model.

    "We had adopted agile ways of working and agile production, so they were already in place when we started working with the service providers. That was part of the culture journey that laid the foundation. Without it, a company isn't ready to take on new ways of working, new policies, and new processes," he says.

    An ambitious target

    A target was also set to move 80% of the core applications to the cloud.

    "We set this target so that everyone would have the right mindset and would challenge and change existing processes and culture," he says.

    The target was set high, but was considered achievable.

    "We had a pretty solid grasp of what was possible. We tried out 10 suppliers and tested our assumptions about how much could be migrated. We tested the technical feasibility of moving to the cloud and how much the company could migrate and manage. Based on that, we concluded that 80% was a realistic figure."

    Results beyond expectations

    Two years after the initial migration, more than 90% of all applications now run in the public cloud. Thirty percent of all applications are new, and roughly 20% have been retired.

    "The 10% that remains on premises is there because of legal requirements or technical debt."

    Ericsson's IT department uses all three major cloud providers, Microsoft, AWS, and Google, and accounts for roughly half of the consumption, with the other half consumed outside IT. A key issue is cost control when capacity and tools are so easy to access. The financial processes were among the hardest to adopt and required a major cultural shift.

    "Cost control used to sit with the infrastructure team, but it's now the responsibility of the operations teams, and it takes quite a lot of governance. Measures such as budget limits can also be used," says Hultin.
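
    As one concrete illustration of the budget limits Hultin mentions (here an AWS monthly cost budget with an 80% alert via boto3; the account ID, amounts, and email address are placeholders, and Azure and Google Cloud offer equivalent controls):

    ```python
    import boto3

    budgets = boto3.client("budgets")

    budgets.create_budget(
        AccountId="123456789012",  # placeholder account
        Budget={
            "BudgetName": "cloud-ops-monthly",
            "BudgetLimit": {"Amount": "50000", "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        NotificationsWithSubscribers=[{
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "ops-team@example.com"}],
        }],
    )
    ```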

    Migrating the business systems

    A large part of the migration involved moving the SAP business systems to the cloud, which took around six months.

    "Our SAP environment was one of the largest and most complex in the world. It was a very large migration," says Sporre.

    To make it a success, the planning was done in close collaboration with all the partners.

    "Working proactively with the experts was a success factor. We knew that SAP runs just fine on the AWS cloud," says Hultin.

    More than 300 people worked in shifts, and the migration of the core systems was completed in a single weekend. The meticulous planning paid off.

    "When someone from finance told us the following Tuesday, 'Call me every hour during this weekend's migration, I want to know right away if anything goes wrong,' we had to explain that the migration was already done," says Sporre.

    The importance of speed

    Hultin believes a fast migration like the one Ericsson carried out is a recipe for success.

    "We paid no attention to other infrastructure strategies. You can complete a migration far faster than you think. We set a fairly tight schedule and pushed the work through somewhat forcefully. But once you get more than 50% of the way through a cloud migration, the entire IT organization changes. The longer you drag it out, the harder the process becomes," he says.

    He also says it's not enough to look at cost alone.

    "Cost savings were on our minds when the project started, but we came to take a broader view," he says, adding that, with that in mind, the cloud effort has now brought IT and the business into sync in a new way and made them more cohesive.

    What drives cost savings in particular is the access it provides to infrastructure and tools.

    "When a new technology like AI comes along, we can use it right away. Rather than investing millions of dollars ourselves, we can take advantage of the billions invested by the cloud providers. It's not hard to see which is easier," says Hultin.

    Cloud Computing, Enterprise Applications, IT Strategy