Microsoft Teams has just turned 2. To celebrate, new features have been announced and are coming your way soon. With this wave of new features there aren’t many reasons left not to adopt Microsoft Teams. Many of our customers are embracing Teams as they see the value in a connected collaboration experience that brings together voice, chat, content & meetings.
For me, nothing beats a face-to-face meeting. Though, as people embrace flexible working, are geographically distributed or constantly on the go, connecting with others can be challenging. A recent focal point for many has been creating a connected meeting experience that brings physical & virtual spaces together. I’ve heard stories and experienced first-hand the challenges with traditional meeting room technology. Dial-ins never work, people can’t see what’s being presented in the physical room, it’s difficult to hear people on the phone, people don’t have the right version of the software…the list of frustrations is long.
The announcement includes lots of new features that enhance a range of experiences, though I want to focus on the ones I think enhance the meeting experience.
Microsoft Whiteboard
Whilst still in preview, this is a hidden gem. Microsoft Whiteboard allows people to quickly draw and share in real-time. No physical whiteboards needed. Better yet, the drawings are automatically saved and easily available when you need to revisit them. Taking it one step further – for those lucky enough to have a Surface Hub, tighter Teams integration is coming your way. There’s lots of interest in Surface Hub and it’s worthwhile checking out.
New Calendar App
The current Meetings Tab is being renamed to Calendar and lots of updates are coming. You’ll be able to join, RSVP to, cancel or decline meetings from the right-click menu. You’ll be able to see a range of views including daily, weekly or work week, which will honour your settings in Outlook – so no need to switch to Outlook as often. You have to wonder – will Teams eventually replace Outlook? I think so.
New Meeting Devices
As Teams grows, so does the maturity of hardware. There are some great new Teams Certified devices from AudioCodes, Crestron, HP, Jabra, Lenovo, Logitech, Plantronics/Polycom & Sennheiser. Check out the Teams Marketplace to buy and start trialling.
Intelligent Capture
For those people who still like to use physical whiteboards, this feature is for you! Not only can you add a second camera to your meeting, it also digitises the physical whiteboard. Intelligent capture ensures your whiteboard drawings are still visible even as you draw – check out the clip below to see it in action (borrowed from Microsoft’s announcement).
Hopefully that’s given you a quick overview of the great new features coming to Microsoft Teams soon.
Recent consulting engagements have found me helping customers define what Office365 means to them & what value they see in its use. They are lucky to have licenses and are seeking help to understand how to drive value from the investment.
You’ve heard the sales pitches: Office365 – The platform to solve ALL your needs! From meetings, to document management, working with people outside your organisation, social networking, custom applications, business process automation, forms & workflow, analytics, security & compliance, device management…the list goes on and is only getting bigger!
When I hear Office365 described – I often hear attributes religious people give to God.
It’s everywhere you go – Omnipresent
It knows everything you do – Omniscient
It’s so powerful it can do everything you want – Omnipotent
It’s unified despite having multiple components – Oneness
It can punish you for doing the wrong thing – Wrathful
It’s taking on a persona – how long before it becomes self-aware!?
If it can really meet ALL your needs, how do we define its use? Do we just use it for everything? Where do we start? How do we define what it means if it can do everything?
Enter limitation. Limitation is a powerful idea that brings clarity through constraint. It’s the foundation on which definition is built. Can you really define something that can do anything?
The other side would suggest limiting technology constrains thinking and prevents creativity. I don’t agree. Limitation begets creativity. It helps zero-in thinking and helps create practical, creative solutions with what you have. Moreover, having modern technology doesn’t make you a creative & innovative organisation. It’s about culture, people & process. As always, technology is a mere enabler.
Sometimes it’s easier to start here. Working with Architecture teams to draw boundaries around the system helps provide guidance for appropriate use. They have a good grasp of enterprise architecture and the reasons why things are the way they are. It helps clearly narrow use cases & provides a definition that makes sense to people.
We don’t use it to share content externally because of..
We can’t use it for customer facing staff because of…
We don’t use it for Forms & Workflow because we have <insert app name here>
We don’t use it as a records management system because we have …
Microsoft provide some great material on generic use cases. Document collaboration, secure external sharing, workflow, managing content on-the-go, making meetings more efficient etc. These represent ideals and are sometimes too far removed from the reality of your organisation. Use them as a basis and further develop them with relevance to your business unit or organisation.
Group ideation workshops, discussions & brainstorming sessions are a great way to help draw out use cases. Make sure you have the right level of people, not too high & not too low. You can then follow-up with each and drill in to the detail and see the value the use case provides.
Get some runs on the board
Once you’ve defined a few use cases, work with the business to start piloting. Prove the use case with real-life scenarios. Build the network of people around the use cases and start to draw out and refine how it solves pain, for here is where true value appears. This can be a good news story that can be told to other parts of the business to build excitement.
Plan for supply & demand
Once you have some runs on the board, if successful, word will quickly get out. People will want it. Learn to harness this excitement and build off the energy created. Be ready to deal with a sudden increase in demand.
On the supply side, plan for service management. How do people get support? Who supports it? How do we customise it? What’s the back-out plan? How do we manage updates? All the typical ITIL components you’d expect should be planned for during your pilots.
Office365 Roadmap to remove limitations & create new use cases
Roadmaps are a meaningful way to communicate when value will be unlocked. IT should have a clear picture of what business value is and how it will work to unlock the capabilities the business needs in order to be successful.
Roadmaps do a good job of communicating this. Though typically, they are technology focused. This might be a great way to help unify the IT team, but people on the receiving end won’t quite understand. Communicate using their language in ways they understand, i.e. what value it will provide them, when & how it will help them be successful.
People struggle to find meaning in life. Our place in the world. The value we provide. Our political persuasions allow us to either rethink definitions or preserve our traditions & institutions. Funnily enough, this philosophical divide plays out in technology all the time.
The morbid question – Is the intranet dead? – seems to be popping its head up recently. I get it, the world is evolving, our expectations of ‘digital’ have changed and technology is at a point where it’s no longer a barrier to seizing opportunity.
Socialists would argue the traditional Intranet is dead or at least facing an identity crisis. The word shouldn’t be used or should significantly be redefined to reflect modern times.
Conservatives would argue there is no crisis, the Intranet is fine as is and should not change its definition. Doing so will shake the digital foundations of any modern enterprise.
It all comes back to meaning; what do we mean when we say Intranet?
The Google Definition: “a local or restricted communications network, especially a private network created using World Wide Web software.”
Definition sets the baseline. It helps us understand whether any change is required. Here are some insights from the definition.
Local or restricted
The core here is identity. Identity brings utility. Identity powers relevance. Who should have access? What content should they see? Access is historically defined by organisational boundaries i.e. All staff should have access, but not contractors or service providers.
Gone are the days when the intranet didn’t know who you are.
Communication + Network
The foundations of an intranet. Communication is arguably the most important aspect. It’s evidenced by Corporate Communications teams having ownership and being influential when it comes to decision making. It’s viewed as the primary communications network as it’s available to everyone. It’s there when you login. It’s there when you open your browser.
Metcalfe’s law is relevant here. The value of a network is proportional to the square of the number of connected users. Meaning – the more systems, people & content a person is connected to, the higher the value. Pre API-Revolution, meaningful integration was but a dream. We are now seeing the value of APIs – data is the digital currency that flows through APIs.
Intranets need to become more networked. More connected. More relevant.
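To see why connectivity compounds like this, a quick sketch – the number of possible pairwise connections in a network of n participants is n(n-1)/2, so value grows roughly with the square of n (the figures below are purely illustrative):

```python
def metcalfe_pairs(n: int) -> int:
    # Metcalfe's law intuition: network value tracks the number of
    # possible pairwise connections, n * (n - 1) / 2, i.e. O(n^2).
    return n * (n - 1) // 2

# Doubling participants roughly quadruples the possible connections.
for users in (10, 100, 1000):
    print(users, metcalfe_pairs(users))
```

The takeaway: each new person, system or piece of content connected to the intranet adds value out of proportion to the cost of connecting it.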
It doesn’t limit how it’s consumed – it doesn’t specify browser, device, time or even place.
It limits how it’s created – only World Wide Web software? So 2000. Conceptually this type of software is used, but it’s not limited to it.
It doesn’t limit evolution – my favourite. It shouldn’t be left to deteriorate whilst others evolve.
It doesn’t encapsulate trends – these days we hear about intelligence, digital workplace, responsive, <insert buzz word here>
It doesn’t limit ownership – what are the roles & responsibilities of each group involved? What’s the hierarchy & decision-making process?
I favour evolution, not revolution. Definitions need to strike the right balance between ambiguity and being too explicit. We can’t lose sight of the core – Communications, Network & Identity. I do think we should drop the WWW part. Here’s the new definition:
“a local or restricted communications network, especially a private network”
Beyond the Intranet?
You may think it’s a boring adjustment. To me it provides the right amount of ambiguity that allows companies to give meaning to their intranet, keep a close eye on trends and deliver something that helps people be a valued part of the network.
At its core the intranet is not dead. We have evolved the meaning as technology has enabled different opportunities. These trends are just ways of delivering the core definition in more eccentric ways.
Perhaps a new label is required? Digital Workplace? Digital Front door? Modern Workplace? Label it what you will, the definition is still the same.
How to get started – 5 steps to sustained success
So how do you deliver something? Process is important as it ensures focus. These 5 steps are what I typically follow. I’ll do a follow-up blog on this process.
Striving to be better at what you do is important for your development. Though, it typically translates into developing what you know rather than how you act. For consulting (or any job), there are two parts to the equation: hard skills & soft skills. Balance is needed, so you should learn to develop both. I aim to help people develop their soft skills. They are typically harder to define and require more attention. Below are concepts I work on developing every day, and hopefully you can take some away and start developing them for yourself.
Quality builds trust which creates opportunity
The link between quality & trust is easy to understand. When a relationship or engagement is new, you must prove yourself. The best way is to deliver something of superior quality. Whether that’s a presentation, application or a document. Do what you can to make it a quality output. It can be difficult to define what the quality standard should be so it’s important to set this upfront.
Delivering quality is the best way to build trust; however, being aware of when trust exists is challenging. Know where you are on the spectrum. It’s not as easy as asking ‘do you trust me’. Start with small tests to gauge where you are, then build up to something bigger. Once trust is established, only then can you start being opportunistic – and by that I mean challenging people’s thinking, pitching ideas, pitching for more work. If it doesn’t exist, work on getting it.
Understand when to focus on delivering quality versus being opportunistic
Own your brand
This is how people see you. Your actions, traits & values have a direct correlation to your brand. What are you known for? How effective do people think you are? How well do you know your domain of expertise? At some point, people will talk about you. Managers, customers or colleagues, both past & future. These conversations, ones you aren’t involved in, define your brand and it’s important you own it.
What do you want to be known for?
Keep your commitments
Sounds simple enough. What you agree to in meetings, quick conversations or any other discussion. Manage them, follow up on them & keep on top of them. Let people know where they are up to. Don’t ignore them. Often, we forget the small things we commit to. I’ve found it’s delivering on these small things that go a long way in building quality relationships. People tend not to forget if you let them down.
Diligence is important to your brand. Avoid being that person who can’t keep commitments.
Embrace adversity, build your resilience
Before getting to this one, I’ll say that work can be tough. Mental health is far more important than any job you will ever work. Know your limits. If you are in need of support, please seek it. Most companies have an assistance program available, so contact your manager or HR representative if you are feeling overwhelmed.
Something always goes wrong. Your project isn’t delivering quality, a relationship is damaged, you can’t get something signed off, or you’ve just gone live and everything is on fire. For me, building resilience has been key to being successful in consulting. When things aren’t going well it’s difficult to get motivated, relationships are left in the balance, and you probably want to give up. I believe it’s in these tough moments that our true character really comes out. Do you pull out all stops to get things back on track? Do you give up? How do you respond to these situations?
Adversity defines character but know your limits
Respond, don’t react
Passion is a beautiful thing. When harnessed and used in the right way it can lead to amazing things. We get passionate about what we create or are heavily involved in, so when things don’t go your way it’s very easy to get frustrated & annoyed. In these situations, it’s important not to let your emotions guide your reaction. They always manifest in negative ways: you become short, you get agitated and frustrated quicker, and if left unchecked it can impact the work you deliver.
Don’t write that email. Avoid confronting that person. Go and take time to think about a response.
Play the ball. Not the person.
Stay true to your values
What do you stand for? What’s the right thing to do? Morally & ethically, these are difficult questions to answer. People who know & live through their values are more content with their work & personal lives. Understanding these goes way beyond anything you can do at work. I’m of the view that your values are generally set by age 6 and from that point develop & mature. Work to identify what your values are & seek work that aligns with them. When personal values don’t align with professional ones, it leads to a world of pain.
Hopefully this list can help you sharpen your softer skills & make you a more effective consultant.
It’s one thing to convert a conversation around a broad scope of work into a well-defined and articulated 3 to 4-page proposal (sometimes 20+, depending on whose template you’re using); it’s another thing for a client or customer to read through this document, often multiple times due to a review and response cycle, before finally agreeing to it.
Most don’t enjoy this process. Client stakeholders usually look for a few key things when it comes to the SOW: price, time (hours) and key dates. Other parts are usually skimmed over or can be missed altogether, at least in my experience.
While the above might be nothing new, perhaps it’s time to ask ourselves whether there’s a better way of doing this – can the client business owner, or a nominated stakeholder on behalf of the business owner, be more collaboratively involved in the SOW writing process so that a unified goal between supplier and customer can be achieved?
Recent engagements with various stakeholders have made me realise, as a business analyst, how crucial this aspect of the project is, and how at times it can be a sore point to reference back to when the project is in-flight and expectations don’t align with what is in writing. Therefore, entering an engagement with the mindset of getting semantics right from the get-go might save you from quality issues in delivery down the line.
Usually engaging in potential work with a client involves a conversation – not a sales pitch, just simply talking it through. What follows from here for effective SOW writing is what underpins any good collaborative effort – a channel of clear and responsive communication.
It’s best to idealise this process in stages:
After the initial discussion, send an email to the prospective client containing a skeleton SOW that simply outlines the conversation that was had. This reaffirms the context and conveys listening and understanding from you as the potential solution/service provider. If the engagement is with a new client, convey some understanding of the context around the company
Avoid fluffing it out with proposal-sounding adjectives and dialogue; keep it no-nonsense and to-the-point
Work with the client to clearly define what is expected to be delivered and how long it could potentially take, based on flexibility or constraints on resources for both sides
Define what it will cost based on all the known variables – avoid basing this on ambiguity or a pre-gap analysis of the outlined work at this stage.
Add value to the SOW by considering if there’s a better way to do the proposed work. This is something I’ve found that Kloud always maintains when approaching a piece of work
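To make the costing step above concrete, here’s a back-of-the-envelope sketch – the roles, hours, rates and contingency figure are all hypothetical, not a recommended pricing model:

```python
def sow_estimate(hours_by_role: dict, rates: dict, contingency: float = 0.1) -> float:
    # Sum hours * rate for each role, then add a contingency buffer
    # to cover the variables that can't be pinned down yet.
    base = sum(hours_by_role[role] * rates[role] for role in hours_by_role)
    return round(base * (1 + contingency), 2)

# Hypothetical engagement: 80 consultant hours, 40 BA hours.
estimate = sow_estimate(
    {"consultant": 80, "business_analyst": 40},
    {"consultant": 200.0, "business_analyst": 160.0},
)
print(estimate)
```

The point isn’t the arithmetic; it’s that every input to the estimate is a known variable you’ve agreed with the client, not a guess.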
By defining the ‘pre-sales’ in this way, and by communicating effectively with your client during a proposal, the ‘joyless’ experience of writing a SOW (as it can commonly be perceived) may be alleviated and play a smaller role in convincing your client to work with you.
It’s refreshing to view this process not as a proposal, but as a conversation. After the discovery call, we should have established the client’s confidence in us as consultants. The only thing left to deal with now is the project itself.
Taking my inspiration from Comedy Central, the Yammer Roast is a forum in which we can directly address resistances around Yammer, its role, and past failures in retrospect. Some of my clients have tried Yammer and concluded that, for various reasons, it’s failed to take hold. For some the value is clear and it’s a case of putting a compelling approach and supporting rationale to sponsors and consumers who remain sceptical. Others are looking for a way to make it work in their current collaboration landscape. The Yammer Roast is designed to tease out, recognise and address key resistances. It’s not an opportunity to blindly evangelise Yammer; it’s an exercise in consulting to provide some clarification around Yammer as a business solution, and what’s needed for a successful implementation. In this article, I’ll cover some of the popular resistances aired at Yammer Roasts, why these resistances exist and how you can address them. If you’re an advocate for social networking in your own organisation, my hope is that this can inform your own discussion.
We have concerns over inappropriate usage and distraction from proper work
We’ve got Yammer and no-one is using it.
…but your partners and vendors are, and they’re looking to collaborate with you. Stagnant networks are a common scenario. Your organisation may be looking at alternative platforms as a way to reset or relaunch. Here, you lament the lack of tangible, measurable business outcomes at the outset of the initial rollout, or the lack of investment in change management activities to help drive adoption of the platform. You’ll smile and nod sagely. But, for whatever reason, you’re here. So how can past experiences inform future activities?

Whether you use Yammer or not, the success of your social network in its infancy is dependent on measurable business outcomes. Without the right supporting campaign, a way to track adoption and a way to draw insight from usage, you effectively roll the dice by simply ‘turning it on’. Initiatives around Yammer can start small, with a goal of communicating the success (of a process) and subsequently widening its application within your business. Simply swapping out the technology without thinking about the business outcome may renew interest from sponsors who’ve lost faith in the current product, but you risk a rinse-and-repeat scenario.

“But we’re dependent on executive sponsorship!” I hear you lament. This is a by-product of early boilerplate change campaigns, where success somehow rested on executives jumping in to lead by example. Don’t get me wrong, it’s great when this happens. From my perspective, you need any group within your business with a valuable use case and the willingness to try. You have O365; the technology is there. Consider the Yammer client to be a portal not just into your network, but into the networks of your vendors and partners. Access to your partner/vendor product teams (via Yammer External Networks), and being able to leverage subject matter expertise from them and the wider community, is a compelling case in the absence of an internal use case.
Combatting any negative perceptions of your social network following a failure to launch is all about your messaging, and putting Yammer’s capability into a wider context, which leads me to…
But we’re using Teams, Slack, Jabber, Facebook for Workplace (delete as appropriate)
Feature parity – it can be a head scratcher. “But we can collaborate in Skype. And Teams! And Yammer! And via text! What will our users do?” Enterprise architects will be advocating the current strategic platform in the absence of a differentiator, or exception. Your managed services team will be flagging additional training needs. There will be additional overheads.

If you’re there to champion Yammer in the face of an incumbent (or competing) solution, you need to adopt the tried and tested approach: 1. identify the differentiator and align the new technology (i.e. Yammer) to it, 2. quantify the investment, and 3. outline the return on investment. As a consultant, my first conversations are always focused around the role Yammer will play in your organisation’s collaboration landscape. The objective is to ensure initial messaging about Yammer will provide the required clarity and context.

This reminds me of an engagement some time ago; an organisation with a frontline workforce off the radar, forming working groups in Facebook. “We aren’t across what’s going on. We need to bring them over to Yammer.” Objective noted, but consider the fact that a) these users have established their networks and their personal brand, and b) they are collaborating in the knowledge that big brother isn’t watching. There’s no way in hell they’ll simply jump ship. The solution? What can you provide that the current solution cannot? Perhaps the commitment to listen, respond and enact change. The modern digital workplace is about choice, and choice is okay. Enable your users to make that informed decision and do what is right for their working groups.
It’s another app. There’s an overhead to informing and educating our business.
Of course there is. This is more around uncertainty as to the strategy for informing and educating your business – working out the ‘what’s in it for me?’ element. There is a cost to getting Yammer into the hands of your workforce. For example, from a technical perspective, you need to provide everyone with the mobile app (MAM scenarios included) and help users overcome initial sign-in difficulties (MFA scenarios included). Whatever this may cost in your organisation, your business case needs to provide a justification (i.e. a return on that investment). Campaign activities to drive adoption are dependent on the formal appointment of a Community Manager (in larger organisations), and a clear understanding around moderation. So you do need to create that service description and governance plan.

I like to paint a picture representing the end state – characteristics of a mature, self-sustaining social network. In this scenario, the Yammer icon sits next to Twitter, Instagram and Facebook on the mobile device. You’re a click away from your colleagues and their antics. You get the same dopamine rush on getting an alert. It’s click bait. God forbid, you’re actually checking Yammer during the ad-break, or just before bed time. Hang on, your employee just pointed someone in the right direction, or answered a question. Wait a second! That’s voluntarily working outside of regular hours! Without pay!
Yammer? Didn’t that die out a few years ago?
You’ve got people who remember Yammer back in the days before it was a Microsoft product. Yammer was out there. You needed a separate login for Yammer. There were collaboration features built into Microsoft’s SharePoint platform, but they sucked in comparison, and rather than invest in building competitive, comparative features into their own fledgling collaboration solution, Microsoft acquired Yammer instead. Roll forward a few months, and there’s the option to swap out social/newsfeed features in SharePoint for those in Yammer, via the best possible integration at the time (which was essentially link replacement).

Today, with Office 365, there’s more integration. Yammer has supported O365 sign-in for a couple of years now. Yammer functions are popping up in other O365 workloads. A good example is the Talk about this in Yammer function in Delve, which frames the resulting conversation from Yammer within the Delve UI. From an end user experience perspective there is little difference between Yammer now and the product it was pre-Microsoft acquisition, but the product has undergone significant changes (external groups and networks, for example). Expect ongoing efforts to tighten integration with the rest of the O365 suite, and understand and address the implications of cutting off that functionality.

The Outcome

Yammer (or your social networking platform of choice) becomes successful when it demonstrates a high-value role in driving your organisation’s collaborative and social culture. In terms of maturity we’re talking self-sustaining, beyond efforts to drive usage and lead by example. Your social network is an outlet for everyone in your organisation. People new to your organisation will see it as a reflection of your collaborative and social culture; give them a way to connect with people and immediately contribute in their own way.
It can be challenging to create such an outlet where the traditional hierarchy is flattened, where everyone has a voice (no matter who they are and where they sit within the organisation). Allowing online personalities to develop without reluctance and other constraints (“if it’s public, it’s fair game!”) will be the catalyst to generating the relationships, knowledge, insight (and resulting developments) that will improve your business.
Health care systems often face challenges in the way of being unkept and unmaintained, or managed by too many without consistency in content, harbouring outdated resources. A lot of these legacy training and development systems also wear the pain of constant record churning without a supportable records management system. With the accrual of these records over time forming a ‘Big Data concern’, modernising these eLearning platforms may be the right call to action for medical professionals and researchers. Gone should be the days of manually updating Web Vista on a regular basis. Cloud solutions for Health Care and Research should be well on their way, but the better utilisation of these new technologies will play a key factor in how much confidence health professionals invest in IT providing a means for departmental education and career development moving forward.

Why SharePoint Makes Sense (versus further developing Legacy Systems)

Every day, each document, slide image and scan matters when the paying customer’s dollar is placed on your proficiency to solve pressing health care challenges. Compliance and availability of resources aren’t enough – streamlined and collaborative processes, from quality control to customer relationship management, module testing and internal review, are all minimum requirements for building a modern, online eLearning centre, i.e. a ‘Learning Management System’. ELearningIndustry.com has broken down ten key components that a Learning Management System (LMS) requires in order to be effective. From previous cases working on developing an LMS, or OLC (Online Learning Centre) site using SharePoint, these ten components can indeed be designed within the online platform:
Strong Analytics and Report Generation – for the purposes of eLearning (e.g. dashboards which contain progress reports, exam scores and other learner data), SharePoint workflows allow for progress tracking of training and users’ engagement with content and course materials, while versioning ensures that learning managers, content builders (subject matter experts) and the learners themselves are on the same page (literally).
Course Authoring Capability – SharePoint access and user permissions are directly linked to your Active Directory. Access to content can be managed, both from a hierarchical standpoint or role-based if we’re talking content authors. Furthermore, learners can have access to specific ‘modules’ allocated to them based on department, vocation, etc.
Scalable Content Hosting – flexibility of content or workflows, or plug-ins (using ‘app parts’) to adapt functionality to welcome new learners where learning requirements may shift to align with organisational needs.
Certifications – due to the availability and popularity of SharePoint online in large/global Enterprises, certifications for anywhere from smart to super users is available from Microsoft affiliated authorities or verified third-parties.
Integrations (with other SaaS software, communication tools, etc.) – allow for exchange of information through API’s for content feeds and record management e.g. with virtual classrooms, HR systems, Google Analytics.
Community and Collaboration – added benefit of integrated and packaged Microsoft apps, to create channels for live group study, or learner feedback, for instance (Skype for Business, Yammer, Microsoft Teams).
White Labelling vs. Branding – UI friendly, fully customisable appearance. The modern layout is design-flexible, allowing the institute’s branding to be proliferated throughout the tenant’s SharePoint sites.
Mobile Capability – SharePoint has both a mobile app and can be designed to be responsive to multiple mobile device types
Customer Support and Success – as it is a common enterprise tool, support by local IT should be feasible with any critical product support inquiries routed to Microsoft
Support of the Institute’s Mission and Culture – in health care services, where the churn of information and data demands an innovative, rapid response, SharePoint can be designed to meet these needs; as an LMS, it can adapt to continuously represent the expertise and knowledge of health professionals.
Outside of the above, the major advantage for health services in making the transition to the cloud is improved information security. There are still plenty of cases today where patients are at risk of medical and financial identity fraud due to inadequate information security and manual, hands-on records management processes. Single-platform databasing, along with the from-anywhere accessibility of SharePoint as a cloud platform, addresses the challenge of maintaining networks, PCs, servers and databases, which can be fairly extensive given that many health care institutions extend beyond hospitals into neighbourhood clinics, home health providers and off-site services.
A rite of passage for the majority of us in the tech consultancy world is being part of a medium to large scale data migration at some stage in our careers. No, I don’t mean dragging files from a PC to a USB drive, though this may well have factored into the equation for some of us. What I’m referring to is a planned piece of work whose objective is to move an entire data set from a legacy storage system to a target system. Presumably a portion of this data is actively used, so the migration usually occurs during a planned downtime period, with a communication strategy, staging, resourcing, and so on.
Yes, a lot of us can say ‘been there, done that’. And for some of us it can seem simple when broken down as above. But what does it mean for the end user? The recurring cycle of change is never an easy one, and the impact of a data migration is often a big change. For the team delivering it, it can be just as stress-inducing: sleepless shift cycles, out-of-hours and late-night calls, and project scope creep (note: avoid being vague in work requests, especially when it comes to data migration work) are just a few of the issues that will shave years off anyone unprepared for what a data migration encompasses. Back to the end users: it’s a big change – new applications, new front-end interfaces, new operating procedures, a potential shake-up of business processes, and so on.
Rather than taper off the pain of the transition period, most teams agree with the client to ‘rip the Band-Aid right off’ and move the entire dataset from one system to another in a single operation. Sometimes, depending on the context and platforms, this is a completely seamless exercise: the end user logs in on a Monday and is mostly unaware of the switch. But whether you take this or a phased approach, there are signs in today’s technology services landscape that these operations are ageing and becoming outdated.
Data Volumes Are Climbing…
… to put it mildly.
We’re in a world of Big Data, and not only for global enterprises and large companies: mid-sized ones, and even some individuals, are there too. Weekend downtimes aren’t going to be enough – or already aren’t, as this BA discovered on a recent assignment. And when your data volumes aren’t proportionate to the number of end users you’re transitioning (the bigger goal being, to my mind, the transformation of the user experience), you’re left with finite amounts of time to actually perform tests, gain user acceptance, and plan and strategise for mitigation and potential rollback.
Migration through Cloud Platforms is not yet well-optimised for effective (pain-free) Migrations
Imagine you have a billing system containing somewhere up to 100 million fixed assets (active and backlog). The requirement is to migrate them all to a new system that is more intuitive for the accountants of your business. The app has a built-in API that supports 500 asset migrations a second. Not bad – but that is still over 55 hours of continuous API calls, and confined to, say, a three-hour nightly outage window, the migration will take just under 20 days to complete. Not optimal for a project, no matter how much planning goes into the delivery phase. On top of this, consider the slowdown in performance as user access competes with the migration through the same API or load gateway. Not fun.
What’s the Alternative?
In a world where we’re looking to make technology and solution delivery faster and more efficient, the future of data migration may in fact be headed in the opposite direction. Rather than phasing your migrations over outage windows of days or weeks, or from weekend to weekend, why not stretch this out to months? Now, before anyone cries ‘exorbitant billables’, I’m not suggesting that the migration project itself be drawn out for an overly long period of time (months, a year), nor that a project team be kept around for the unforeseen, yet to-be-expected, challenges mentioned above.
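The arithmetic behind that estimate is worth making explicit. A quick PowerShell sketch, using the hypothetical figures from the example above (the three-hour window is an illustrative assumption):

```powershell
# Hypothetical figures from the billing-system example above
$assets        = 100000000   # total fixed assets to migrate
$ratePerSecond = 500         # built-in API throughput

$totalSeconds = $assets / $ratePerSecond        # 200,000 s of continuous API calls
$totalHours   = $totalSeconds / 3600            # ~55.6 hours

# Confined to an assumed three-hour nightly outage window:
$windowSeconds = 3 * 3600                       # 10,800 s per night
$nights = [math]::Ceiling($totalSeconds / $windowSeconds)   # 19 nights - just under 20 days
```

Change the window size or the API rate and the elapsed calendar time swings dramatically, which is exactly why throughput pilots (as described later in this post) matter so much.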
Rather, as tech and business consultants and experts, a possible alternative is redirecting our efforts towards quality of service: focusing on the change management aspects of end-user adoption of a new platform and its associated processes, and on the capability of a given company’s managed IT services, not only to support the change but in fact to incorporate migration as a standard service offering.
The Bright(er) Future for Data Migrations
How can managed services support a data migration without prior specialisation in, say, PowerShell scripting, or experience in performing a migration via a tool or otherwise? Nowadays we are fortunate that vendors are developing migration tools to be highly user-friendly and purposed for ongoing enterprise use. They are doing this to shift the view that a relationship with a solution provider for projects such as this should simply be a one-off, and that the capability of the migration software matters more than the capability of the resource performing it (still important, but ‘technical skills’ in this space are becoming more of a level playing field). From a business consultancy angle, an opportunity to provide an improved quality of service presents itself when we look at ways to use our engagement and discovery skills to bridge the gaps that are often prevalent between managed services and an understanding of the business’s everyday processes. A lot of this will hinge on the very data being migrated, and it can spark positive action from a business, given time and full support from managed services. Data migration as a BAU activity can become iterative and on request: active and relevant data first, followed potentially by a ‘house-cleaning’ activity where the business effectively declutters data it no longer needs or that is no longer relevant. It’s early days, and we’re likely still toeing the line between old data migration methodology and exploring what could be.
But ultimately, enabling a client or company to be more technologically capable, starting with data migrations, is definitely worth a cent or two.
Sharegate supports PowerShell scripting, which can be used to automate and schedule migrations. In this post I am going to demonstrate an example of end-to-end automation to migrate network shares to SharePoint Online. The process effectively reduces the task of executing migrations to “just flicking a switch”.
The following pre-migration activities were conducted before the actual migration:
Analysis of Network Shares
Discussions with stakeholders from different business units to identify content needs
Pilot migrations to identify average throughput capability of migration environment
Identification of acceptable data filtration criteria, and preparation of Sharegate migration template files based on business requirements
Derivation of a migration plan from the above steps
Migration Automation flow
The diagram represents a high-level flow of the process:
The migration automation was implemented to execute the following steps:
Migration team indicates that migration(s) are ready to be initiated by updating the list item(s) in the SharePoint list
Updated item(s) are detected by a PowerShell script polling the SharePoint list
The list item data is downloaded as a CSV file, one CSV file per list item. The list item status is updated to “started” so that it is not read again
The CSV file(s) are picked up by another migration PowerShell script to initiate migration using Sharegate
The required migration template is selected based on the options specified in the migration list item / csv to create a migration package
The prepared migration task is queued for migration with Sharegate, and migration is executed
Information mails are “queued” to be dispatched to migration team
Emails are sent out to the recipients
The migration reports are extracted out as CSV and stored at a network location.
The following software components were utilized for implementing the automation:
Master and Migration terminals hosted as virtual machines – each terminal is a Windows 10 virtual machine. Using virtual machines provides the following advantages over desktops:
VMs are generally deployed directly in datacenters, and hence near the data source
They are available all the time and are not affected by power outages or manual shutdowns
They can be easily scaled up or down based on project requirements
They benefit from better internet connectivity, and separate internet routing can be drawn up based on requirements
Single Master Terminal – a single master terminal is useful to centrally manage all other migration terminals. Using a single master terminal offers the following advantages:
Single point of entry to migration process
Acts as central store for scripts, templates, aggregated reports
Acts as a single agent to execute non-sequential tasks such as sending out communication mails
Multiple Migration terminals – it is optimal to run parallel migrations across multiple machines (terminals) to expedite overall throughput within the available migration window (generally non-business hours). Sharegate has the option to use either 1 or 5 licenses at once during a migration; we utilized 5 Sharegate licenses on 5 separate migration terminals.
PowerShell Remoting – PowerShell remoting allows remote PowerShell sessions to be opened to other Windows machines. This lets the migration team control and manage migrations from just one terminal (the master terminal) and simplifies monitoring of simultaneous migration tasks. More information about PowerShell remoting can be found here.
PowerShell execution policy – the scripts running on the migration terminals are stored at a network location on the master terminal. This allows scripts to be changed or updated on the fly without copying them over to the migration terminals. The script execution policy of the PowerShell window needs to be set to “Bypass” to allow execution of scripts stored at a network location (for quick reference, the command is “Set-ExecutionPolicy -ExecutionPolicy Bypass”).
Windows Scheduled Tasks – the PowerShell scripts are scheduled as tasks through Windows Task Scheduler, and these tasks can be managed remotely by scripts running on the migration terminals. The scripts are stored at a network location on the master terminal.
Basic task in a windows scheduler
PowerShell script file configured to run as a Task
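As a sketch of the scheduled-task setup, a migration script can be registered to run at the start of the nightly window like this (the UNC path, task name and start time are illustrative, not the project’s actual values):

```powershell
# Illustrative names - adjust to your environment.
$scriptPath = '\\masterterminal\Scripts\Start-Migration.ps1'   # hypothetical path on the master terminal

# Run PowerShell with Bypass so the network-hosted script is allowed to execute
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
           -Argument "-ExecutionPolicy Bypass -File `"$scriptPath`""

# Trigger at the assumed start of the nightly migration window
$trigger = New-ScheduledTaskTrigger -Daily -At 7pm

Register-ScheduledTask -TaskName 'SharegateMigration' -Action $action -Trigger $trigger
```

Because the script lives on the master terminal’s share, updating it there updates what every migration terminal runs on its next trigger.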
Master terminal (Manage migrations)
2 cores, 4 GB RAM, 100 GB HDD
Used for managing scripts execution tasks on other terminals (start, stop, disable, enable)
Used for centrally storing all scripts and ShareGate property mapping and migration templates
Used for Initiating mails (configured as basic tasks in task scheduler)
Used for detecting and downloading migration configuration of tasks ready to be initiated (configured as basic tasks in task scheduler)
Windows 10 virtual machine installed with the required software.
Script execution policy set as “Bypass”
Migration terminals (Execute migrations)
8 cores, 16 GB RAM, 100 GB HDD
Used for processing migration tasks (configured as basic tasks in windows task scheduler)
Multiple migration terminals may be set up based on the available Sharegate licenses
Windows 10 Virtual machines each installed with the required software.
Activated Sharegate license on each of the migration terminals
PowerShell remoting needs to be enabled
Script execution policy set as “Bypass”
Initiate queueing of Migrations
Before migration, the migration team must perform any manual pre-migration tasks defined by the migration process and agreed with stakeholders. Some of these pre-migration tasks and checks may be:
Inform other teams about a possible network surge
Confirming that no other activity is consuming bandwidth (e.g. scheduled updates)
Inform the impacted business users about the migration – this would be generally set up as the communication plan
Freezing the source data store as “read-only”
A list was created on a SharePoint Online site to enable users to indicate that a migration is ready to be processed. Updates to this list trigger the actual migration downstream. The migration plan is pre-populated in this list as part of the migration planning phase. The migration team can then update one of the fields (ReadyToMigrate in this case) to initiate a migration, or to skip a planned migration if so desired. Migration status is also written back to this list by the automation process. The list serves as a single point of entry to initiate and monitor migrations; in other words, it abstracts out migration processing and can be an effective tool for migration and communication teams. The list was created with the following columns:
Destination Library => Destination library on the site
Ready to migrate => Indicates that the migration is ready to be triggered
Migrate all data => Indicates whether all data from the source is to be migrated (default is No, meaning only data matching the predefined filter options is migrated; more on filtered options can be found here)
Started => updated by automation when the migration package has been downloaded
Migrated => updated by automation after migration completion
Terminal Name => updated by automation specifying the terminal being used to migrate the task
Migration configuration list
After the migration team is ready to initiate the migration, the field “ReadyToMigrate” for the migration item in the SharePoint list is updated to “Yes”.
“Flicking the switch”
Script to create the migration configuration list
The script below creates the migration configuration list in SharePoint Online.
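The original script was embedded in the post. As an illustrative sketch only (the author’s actual script may differ), the list and its columns could be provisioned with the PnP PowerShell module; the tenant URL and internal column names are assumptions:

```powershell
# Sketch - assumes the PnP PowerShell module is installed; URL is illustrative.
Connect-PnPOnline -Url 'https://tenant.sharepoint.com/sites/migration' -Credentials (Get-Credential)

# Create the migration configuration list
New-PnPList -Title 'MigrationConfiguration' -Url 'lists/MigrationConfiguration' -Template GenericList

# Columns described above (internal names are illustrative)
Add-PnPField -List 'MigrationConfiguration' -DisplayName 'Destination Library' -InternalName 'DestinationLibrary' -Type Text -AddToDefaultView
Add-PnPField -List 'MigrationConfiguration' -DisplayName 'Ready to migrate'    -InternalName 'ReadyToMigrate'    -Type Boolean -AddToDefaultView
Add-PnPField -List 'MigrationConfiguration' -DisplayName 'Migrate all data'    -InternalName 'MigrateAllData'    -Type Boolean -AddToDefaultView
Add-PnPField -List 'MigrationConfiguration' -DisplayName 'Started'             -InternalName 'Started'           -Type Boolean -AddToDefaultView
Add-PnPField -List 'MigrationConfiguration' -DisplayName 'Migrated'            -InternalName 'Migrated'          -Type Boolean -AddToDefaultView
Add-PnPField -List 'MigrationConfiguration' -DisplayName 'Terminal Name'       -InternalName 'TerminalName'      -Type Text -AddToDefaultView
```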
Script to store credentials
The file stores the credentials so they can be used by subsequent scripts.
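One common pattern for this (a sketch, not necessarily the author’s exact approach) is to serialise a PSCredential to disk with Export-Clixml, which protects the password with DPAPI; the file path is illustrative:

```powershell
# Save once. The password in the XML is encrypted for the current
# user on the current machine (DPAPI), so the file only works there.
Get-Credential | Export-Clixml -Path 'C:\AutomatedMigrationData\creds.xml'   # illustrative path

# Reuse in subsequent scripts
$cred = Import-Clixml -Path 'C:\AutomatedMigrationData\creds.xml'
```

The DPAPI binding is the design trade-off: convenient for unattended runs on a dedicated terminal, but the file must be created on each machine that uses it.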
Queuing the migration tasks
A PowerShell script polls the migration configuration list in SharePoint at regular intervals to determine whether a migration task is ready to be initiated. The available migration configurations are then downloaded as CSV files (one item per file) and stored in a migration packages folder on the master terminal. Each CSV file maps to one migration task to be executed by a migration terminal, ensuring that the same migration task is not executed by more than one terminal. It is important that this script runs on a single terminal so that only one migration is executed per source item.
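A minimal sketch of that polling step, again assuming PnP PowerShell and the illustrative list and column names used above:

```powershell
# Sketch only - URL, paths and internal column names are assumptions.
$packageFolder = 'C:\AutomatedMigrationData\packages'
Connect-PnPOnline -Url 'https://tenant.sharepoint.com/sites/migration' `
    -Credentials (Import-Clixml 'C:\AutomatedMigrationData\creds.xml')

# Items flagged ready but not yet picked up
$ready = Get-PnPListItem -List 'MigrationConfiguration' |
    Where-Object { $_['ReadyToMigrate'] -eq $true -and $_['Started'] -ne $true }

foreach ($item in $ready) {
    # One CSV per list item, so each migration terminal picks up exactly one task
    [pscustomobject]@{
        Id                 = $item.Id
        DestinationLibrary = $item['DestinationLibrary']
        MigrateAllData     = $item['MigrateAllData']
    } | Export-Csv -Path (Join-Path $packageFolder "migration-$($item.Id).csv") -NoTypeInformation

    # Mark as started so the item is not read again on the next poll
    Set-PnPListItem -List 'MigrationConfiguration' -Identity $item.Id -Values @{ Started = $true }
}
```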
The downloaded migration configuration CSV files are detected by migration script tasks executing on each of the migration terminals. Based on the specified source, destination and migration options the following tasks are executed:
Reads the item from the configuration list to retrieve updated data based on the item ID
Verifies the source, and sends a failure mail if the source is invalid or unavailable
Revalidates that the migration has not already been initiated by another terminal
Updates the “TerminalName” field in the SharePoint list to indicate an initiated migration
Checks if the destination site is created. Creates if not already available
Checks if the destination library is created. Creates if not already available
Triggers an information mail informing migration start
Loads the required configurations based on the required migration outcome. The migration configurations specify migration options such as cut over dates, source data filters, security and metadata. More about this can be found here.
Initiates the migration task
Extracts the migration report and stores it as CSV
Extracts a secondary migration report as CSV to derive the paths of all files successfully migrated. These CSVs can be read by an optional downstream process
Triggers an information mail informing that the migration is complete
Checks for another queued migration and repeats the procedure
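The heart of each migration task, stripped to a sketch, uses Sharegate’s PowerShell cmdlets. The source folder, site URL and file paths are illustrative, and the exact cmdlet options should be checked against Sharegate’s documentation:

```powershell
# Sketch - assumes the Sharegate desktop application (and its module) is installed.
Import-Module Sharegate

# Read one queued migration package (illustrative path)
$package = Import-Csv 'C:\AutomatedMigrationData\packages\migration-1.csv'

# Connect to the destination site with the stored credentials
$cred    = Import-Clixml 'C:\AutomatedMigrationData\creds.xml'
$dstSite = Connect-Site -Url 'https://tenant.sharepoint.com/sites/destination' `
               -Username $cred.UserName -Password $cred.Password
$dstList = Get-List -Site $dstSite -Name $package.DestinationLibrary

# Migrate the network share into the destination library
$result = Import-Document -SourceFolder '\\fileserver\share\finance' -DestinationList $dstList

# Persist the migration report for the downstream reporting process
Export-Report $result -Path 'C:\AutomatedMigrationData\reports\migration-1.csv'
```

In the real process, the property mapping and filter template selected from the migration options would also be applied before `Import-Document` runs.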
The automation script is given below –
This script triggers emails to the required recipients. It polls a folder (‘\\masterterminal\c$\AutomatedMigrationData\mails\input’) for files to be sent out as emails. The CSV files specify the subject and body of the emails to be sent to recipients configured in the script. Processed CSV files are moved to a ‘final’ folder.
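A sketch of that dispatcher, assuming a CSV with Subject and Body columns; the SMTP server, addresses and folder names are illustrative:

```powershell
# Sketch only - server names, folders and recipients are assumptions.
$inputFolder = '\\masterterminal\c$\AutomatedMigrationData\mails\input'
$finalFolder = '\\masterterminal\c$\AutomatedMigrationData\mails\final'
$recipients  = 'migration-team@contoso.com'

Get-ChildItem -Path $inputFolder -Filter '*.csv' | ForEach-Object {
    $mail = Import-Csv $_.FullName   # expects Subject and Body columns

    Send-MailMessage -To $recipients -From 'migration-bot@contoso.com' `
        -Subject $mail.Subject -Body $mail.Body -SmtpServer 'smtp.contoso.com'

    # Move processed files so they are not sent twice
    Move-Item $_.FullName -Destination $finalFolder
}
```

Because the input folder sits on the master terminal’s share, any terminal can “queue” a mail simply by dropping a CSV there.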
Manage migration tasks (scheduled tasks)
The PowerShell script utilizes PowerShell remoting to manage Windows Task Scheduler tasks configured on the other terminals.
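A sketch of that remote management, with illustrative terminal and task names:

```powershell
# Sketch - terminal names and the task name are illustrative.
$terminals = 'MIGTERM01','MIGTERM02','MIGTERM03','MIGTERM04','MIGTERM05'

# Pause the migration task on every migration terminal from the master terminal
Invoke-Command -ComputerName $terminals -ScriptBlock {
    Disable-ScheduledTask -TaskName 'SharegateMigration'
}

# Resume when the next migration window opens
Invoke-Command -ComputerName $terminals -ScriptBlock {
    Enable-ScheduledTask -TaskName 'SharegateMigration'
}
```

This is what makes the master terminal the single point of control: start, stop, enable and disable operations fan out to all five migration terminals from one session.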
The migration automation process described above helps automate the migration project and reduces manual overhead during the migration. Since the scripts utilize pre-configured migration options and templates, the outcome is consistent with the plan. Controlling and monitoring migration tasks through a SharePoint list introduces transparency into the system and abstracts away the migration complexity: business stakeholders can easily review migration status from the SharePoint list, which ensures an effective communication channel, and automated mails provide additional information about migration status. The migration tasks are executed in parallel across multiple migration machines, which aids better utilization of the available migration window.
Why is Continual Service Improvement (CSI) Required?
The goal of Continual Service Improvement (CSI) is to align and realign IT Services to changing business needs by identifying and implementing improvements to the IT services that support Business Processes.
Even though CSI aims to improve the effectiveness, efficiency and cost-effectiveness of IT processes across the whole service life-cycle, its perspective on improvement is the business perspective of service quality.
To manage improvement, CSI should clearly define what should be controlled and measured.
It is also important to understand the difference between continuous and continual: continuous implies improvement without interruption, whereas continual implies a series of recurring improvement activities with pauses in between, which is what CSI describes.
What are the Main Objectives of Continual Service Improvement (CSI)?
Continual Service Improvement (CSI) – Approach
Continual Service Improvement (CSI) – 7-Step Process
The CSI measurement and improvement process has 7 steps, which help to define the corrective action plan. In standard ITIL terms these are: identify the strategy for improvement; define what you will measure; gather the data; process the data; analyse the information and data; present and use the information; and implement improvement.
Continual Service Improvement (CSI) – Challenges, CSFs & Risks
Like all programs, CSI has its challenges, critical success factors and risks, some of which are listed below. Above all, implementing a CSI program requires senior management buy-in.
Please remember: transforming IT is a process, a journey, not an event. I hope these are useful.