The Present [and Future] Landscape of Data Migrations

A rite of passage for most of us in the tech consultancy world is being part of a medium to large scale data migration at some stage in our careers. No, I don’t mean dragging files from a PC to a USB drive, though this may well have factored into the equation for some of us. What I’m referring to is a planned piece of work whose objective is to move an entire data set from a legacy storage system to a target system. Presumably a portion of this data is actively used, so the migration usually occurs during a planned downtime window, with a communication strategy, staging, resourcing and so on.
Yes, a lot of us can say ‘been there, done that’. And for some of us, it can seem simple when broken down as above. But what does it mean for the end user? The recurring cycle of change is never an easy one, and the impact of a data migration is often a big change. For the team delivering it, the work can be just as stress-inducing: sleepless shift cycles, out-of-hours and late-night calls, and scope creep (note: avoid being vague in work requests, especially when it comes to data migration work) are just a few of the issues that will shave years off anyone who’s unprepared for what a data migration encompasses.
Back to the end users: it’s a big change. New applications, new front-end interfaces, new operating procedures, a potential shake-up of business processes, and so on. Most teams agree with the client to shorten the pain of the transition period, ‘rip the Band-Aid right off’, and move the entire dataset from one system to another in a single operation. Sometimes, depending on the context and platforms, this is a completely seamless exercise: the end user logs in on a Monday and is mostly unaware of the switch. Whether taking this or a phased approach to the migration, there are signs in today’s technology services landscape that these operations are ageing and becoming somewhat outdated.
Data Volumes Are Climbing…
… to put it mildly. We’re in a world of Big Data, and not only for global enterprises and large companies; it applies to mid-sized ones and even some individuals too. Weekend downtimes aren’t going to be enough – or rather weren’t, as this BA discovered on a recent assignment – and when your data volumes aren’t proportional to the number of end users you’re transitioning (the bigger goal, in my mind, is in fact the transformation of the user experience), you’re left with finite amounts of time to actually perform tests, gain user acceptance, and plan and strategise for mitigation and potential rollback.
Cloud Platforms Are Not Yet Well Optimised for Effective (Pain-Free) Migrations
Imagine you have a billing system containing somewhere up to 100 million fixed assets (active and backlog). The requirement is to migrate them all to a new system that is more intuitive for the accountants in your business. The app has a built-in API that supports 500 asset migrations a second. Not bad, but even at that rate the raw transfer alone takes more than two days of continuous API calls, and real-world throttling, validation and retries stretch that out much further. Not optimal for a project, no matter how much planning goes into the delivery phase. On top of this, consider the performance degradation as user access competes for the same API or load gateway. Not fun.
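As a rough back-of-the-envelope check (assuming the API could actually sustain its rated throughput around the clock):

100,000,000 assets ÷ 500 assets/second = 200,000 seconds ≈ 56 hours ≈ 2.3 days of non-stop API calls, before any throttling, validation or retries are factored in.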
What’s the Alternative?
In a world where we’re looking to make technology and solution delivery faster and more efficient, the future of data migration may, in fact, be headed in the opposite direction.
Rather than phasing your migrations over outage windows of days or weeks, or from weekend-to-weekend, why not stretch this out to months even?
Now, before anyone cries ‘exorbitant billables’, I’m not suggesting that the migration project itself be drawn out for an overly long period of time (months, a year).
No, the idea is not to keep a project team around for the unforeseen, yet to-be-expected, challenges mentioned above. Rather, as tech and business consultants, a possible alternative is to redirect our efforts towards quality of service: focusing on the change management aspects of end-user adoption of the new platform and its associated processes, and on building the capability of a company’s managed IT services to not only support the change but incorporate the migration into a standard service offering.
The Bright(er) Future for Data Migrations
How can managed services support a data migration without prior specialisation in, say, PowerShell scripting, or experience performing migrations with a particular tool? Nowadays we are fortunate that vendors are developing migration tools to be highly user-friendly and purposed for ongoing enterprise use. They are doing this to shift the view that a relationship with a solution provider for projects such as this should simply be a one-off, and that the capability of the migration software matters more than the capability of the resource performing the migration (still important, but ‘technical skills’ in this space are becoming more of a level playing field).
From a business consultancy angle, there is an opportunity to provide an improved quality of service by using our engagement and discovery skills to bridge the gaps that often exist between managed services and an understanding of the business’s everyday processes. A lot of this will hinge on the very data being migrated. Given time, and with full support from managed services, this can prompt positive action from the business. Data migration as a BAU activity can become iterative and request-driven: active and relevant data first, followed potentially by a ‘house-cleaning’ exercise where the business declutters data it no longer needs or that is no longer relevant.
It’s early days and we’re likely still walking the line between old data migration methodology and exploring what could be. But ultimately, enabling a client or company to become more technologically capable, starting with data migrations, is definitely worth a cent or two.

IaaS – Application Migration Management Tracker

What is IaaS Application Migration?

Application migration is the process of moving an application program, or set of applications, from one environment to another. This includes migration from an on-premises enterprise server to a cloud provider’s environment, or from one cloud environment to another. This post focuses on Infrastructure as a Service (IaaS) application migration.

Application Migration Management Tracker

Having a visual IaaS application migration tracker helps to clearly identify all dependencies and blockers and to manage your end-to-end migration tracking. In addition to the project plan, this artefact will help to run daily stand-ups and support accurate weekly status reporting.

Benefits

  • Clear visibility of current status
  • Ownership and accountability
  • Assists with escalations
  • Clear overall status
  • Lead time to CAB and preparation time
  • Allows time to agree and test key firewall/network configurations
  • Assists go/no-go decisions
  • Cutover communications
  • All dependencies
  • Warranty period tracking
  • BAU sign-off
  • Decommissioning of old systems if required

When to use and why?

  • Daily stand-ups
  • Go/no-go meetings to agree clear next steps and accountability
  • Risks and issues preparation and mitigation steps
  • During change advisory board (CAB) meetings to provide an accurate report and obtain approval to implement
  • Traceability to tick off and progress BAU activities and the preparation of operational support activities

Application Migration Approach

[Image: Application migration approach]

Example of IaaS Application Migration Tracker

Below is an example that may assist with tracking your application migration in detail. It covers:

  • Application list
  • Quarterly timelines
  • Clear ownership
  • Migration tracking sub-tasks
  • Warranty tracking sub-tasks
  • Current status
  • Final status

[Image: Example IaaS application migration tracker]
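To turn the columns above into a working artefact, here is a minimal PowerShell sketch that generates an empty tracker as a CSV (the application name, owner and sub-task values are hypothetical placeholders):

# One row per application; columns mirror the tracker fields listed above
$tracker = @(
    [pscustomobject]@{
        Application   = "Finance App"     # hypothetical application
        Quarter       = "Q3"              # quarterly timeline
        Owner         = "App Owner A"     # clear ownership
        MigrationTask = "Firewall rules agreed and tested"
        WarrantyTask  = "Post-migration monitoring"
        CurrentStatus = "In progress"
        FinalStatus   = ""                # e.g. BAU sign-off / decommissioned
    }
)

# Export the skeleton so it can be maintained alongside the project plan
$tracker | Export-Csv -Path .\iaas-migration-tracker.csv -NoTypeInformation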

Summary

I hope this example helps; it can be customised to suit organisational processes and priorities. The tracker can be used for both simple and complex application migrations. Thanks.

Proactive Problem Management – Benefits and Considerations

IT Service Management – Proactive Problem Management

The goal of Proactive Problem Management is to prevent incidents by identifying weaknesses in the IT infrastructure and applications before any issues occur.

Benefits

[Image: Proactive Problem Management benefits]

  • Greater system stability – This leads to increased user satisfaction.
  • Increased user productivity – This adds to a sizable productivity gain across the enterprise.
  • Positive customer feedback – When we proactively approach users who have been experiencing issues and offer to fix their problems the feedback will be positive.
  • Improved security – When we reduce security incidents, this leads to increased enterprise security.
  • Improved software/product quality – the data we collect can be used to improve quality.
  • Reduced volume of problems – lowers the ratio of immediate (reactive) support effort to planned support effort across the overall Problem Management process.

Considerations

  • Proactive Problem Management can be made easier by the use of a Network Monitoring System.
  • Proactive Problem Management is also involved with getting information out to your customers to allow them to solve issues without the need to log an Incident with the Service Desk.
    • This would be achieved by the establishment of a searchable Knowledgebase of resolved Incidents, available to your customers over the intranet or internet, or the provision of a useable Frequently Asked Question page that is easily accessible from the home page of the Intranet, or emailed regularly.
  • Many organisations are performing Reactive Problem Management; very few are successfully undertaking the proactive part of the process simply because of the difficulties involved in implementation.
    • Linking Proactive Problem Management to business value
    • The cost of proactive vs. reactive Problem Management
    • The establishment of other ITIL processes such as Configuration Management, Availability Management and Capacity Management

 

Proactive Problem Management – FAQ

Q – At what stage of our ITIL process implementation should we look at Implementing Proactive Problem Management?

  • A – Proactive Problem Management cannot be contemplated until you have Configuration Management, Availability Management and Capacity Management well established as the outputs of these processes will give you the information that is required to pinpoint weaknesses in the IT infrastructure that may cause future Incidents.

Q – How can we measure and manage performance?

  • A – Moving from reactive to proactive maintenance management requires time, money, human resources, as well as initial and continued support from management. Before improving a process, it is necessary to define the improvement. That definition will lead to the identification of a measurement, or metric. Instead of intuitive expectations of benefits, tangible and objective performance facts are needed. Therefore, the selection of appropriate metrics is an essential starting point for process improvement.
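As an illustration, one commonly tracked metric is the share of problem records that are still reactive. A minimal sketch, assuming problem records have been exported to a CSV with a hypothetical Category column marked either ‘Reactive’ or ‘Proactive’:

# Hypothetical export of problem records; the file and column names are assumptions
$problems = Import-Csv -Path .\problem-records.csv

$reactive  = @($problems | Where-Object { $_.Category -eq 'Reactive'  }).Count
$proactive = @($problems | Where-Object { $_.Category -eq 'Proactive' }).Count

# Proportion of problem effort that is still reactive
$percentReactive = [math]::Round(100 * $reactive / [math]::Max(1, $reactive + $proactive), 1)
Write-Output "Reactive problem records: $reactive ($percentReactive% of total)"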

 

Proactive Problem Management – High Level Process Diagram

[Image: Proactive Problem Management high-level process diagram]

Summary

Implementing proactive problem management requires an agreed, uniform approach, especially when multiple managed service providers (MSPs) are involved with an organisation. Hope you found this useful.

Azure AD Domain Services

I recently had what I thought was a rather unique requirement from a customer.
The requirement was to build Azure IaaS virtual machines and have them joined to a managed domain, while also being able to authenticate to the virtual machines using Azure AD credentials.
The answer is Azure AD Domain Services!
Azure AD Domain Services provides managed domain services such as domain join, group policy and Kerberos/NTLM authentication without the need for you to deploy and manage domain controllers in the cloud. For more information see https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-overview
It is not without its limitations though; the main things to call out are that configuring domain trusts and applying schema extensions is not possible with Azure AD Domain Services. For a full list of limitations see: https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-comparison
Unfortunately, at this point in time you cannot use ARM templates to configure Azure AD Domain Services, so you are limited to the Azure Portal or PowerShell. I am not going to bore you with the details of the deployment steps as it is quite simple and you can easily follow the steps supplied in the Microsoft documentation: https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-enable-using-powershell
What I would like to do is point out the following learnings that I discovered during my deployment.

  1. In order to utilise Azure AD credentials that are synchronised from on-premises, synchronisation of NTLM/Kerberos credential hashes must be enabled in Azure AD Connect; this is not enabled by default.
  2. If there are any cloud-only user accounts, all users who need to use Azure AD Domain Services must change their passwords after Azure AD Domain Services is provisioned. The password change process causes the credential hashes for Kerberos and NTLM authentication to be generated in Azure AD.
  3. Once a cloud-only user account has changed its password, you will need to wait a minimum of 20 minutes before you will be able to use Azure AD Domain Services (this got me, as I was impatient).
  4. Speaking of patience, the provisioning process for Azure AD Domain Services takes about an hour.
  5. Have a dedicated subnet for Azure AD Domain Services to avoid any connectivity issues that may occur with NSGs/firewalls.
  6. You can only have one managed domain connected to your Azure Active Directory.
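For reference, here is a minimal PowerShell sketch of the provisioning call (a sketch only, using the older AzureRM module and hypothetical resource group, VNet, subnet and domain names; the property names on the Microsoft.AAD/DomainServices resource are assumptions, so treat the Microsoft documentation linked above as authoritative):

# Register the resource provider that hosts Azure AD Domain Services (one-off per subscription)
Register-AzureRmResourceProvider -ProviderNamespace "Microsoft.AAD"

# Hypothetical names - use your own resource group, VNet and dedicated subnet
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "rg-identity" -Name "vnet-identity"
$subnetId = ($vnet.Subnets | Where-Object { $_.Name -eq "snet-aadds" }).Id

# Create the managed domain (property names are assumptions based on the resource type)
New-AzureRmResource -ResourceGroupName "rg-identity" `
    -ResourceType "Microsoft.AAD/DomainServices" `
    -ResourceName "contoso.com" `
    -Location "Australia East" `
    -PropertyObject @{ domainName = "contoso.com"; subnetId = $subnetId } `
    -Force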

That’s it, hopefully this helped you get a better understanding of Azure AD Domain Services and assists with a smooth deployment.

Google Cloud Platform: an entrée

The recent opening of a Google Cloud Platform region in Sydney about two months ago triggered my interest in learning more about the platform and understanding how its offering might affect the local market moving forward.
So far, I have concentrated mainly on GCP’s IaaS offering by digging information out of videos and documentation and venturing through the portal and Cloud Shell. I would like to share my first findings and highlight a few features that, in my opinion, make it worth having a closer look.

Virtual Networks are global

Virtual Private Clouds (VPC) are global by default; this means that workloads in any GCP region can be one trace-route hop away from each other in the same private network. Firewall rules can also be applied in a global scope, simplifying preparation activities for regional failover.
Global HTTP Load Balancing is another feature that allows a single entry-point address to direct traffic to the most appropriate backend around the world. This is a very interesting advantage over DNS-based solutions because Global Load Balancing can react almost instantaneously.

Subnets and Availability Zones are independent 

Google Cloud Platform subnets cover an entire region. Regions still have multiple Availability Zones but they are not directly bound to a particular subnet. This comes in handy when we want to move a Virtual Machine across AZs but keep the same IP address.
Subnets also allow Private Google API access to be turned on or off with a simple switch. Private access allows Virtual Machines without Internet access to reach Google APIs and services using their internal IPs.

Live Migration across Availability Zones

GCP supports Live Migration within a region. This feature keeps machines up and running during events like infrastructure maintenance, host and security upgrades, failed hardware, and so on.
A very nice addition to this feature is the ability to migrate a Virtual Machine into a different AZ with a single command:

$ gcloud compute instances move example-instance  \
  --zone <ZONEA> --destination-zone <ZONEB>

Notice that the internal IP is preserved.

The Snapshot service is also global

Moving instances across regions is not as straightforward as moving them within Availability Zones. However, since Compute Engine’s Snapshot service is global, the process is still quite simple.

  1. I create a Snapshot from the VM instance’s disk.
  2. I create a new Disk from the Snapshot, placing it in the AZ of the target region I want to move the VM to.
  3. Then I create a new VM using the Disk (see the sketch below).
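For reference, a sketch of those three steps using the gcloud CLI (the instance, disk, snapshot and zone names are hypothetical placeholders):

# 1. Snapshot the source VM's disk
$ gcloud compute disks snapshot example-disk --zone us-central1-a \
  --snapshot-names example-snapshot

# 2. Create a new disk from the snapshot in the target region's AZ
$ gcloud compute disks create example-disk-eu --source-snapshot example-snapshot \
  --zone europe-west1-b

# 3. Create the new VM in the target AZ using that disk as its boot disk
$ gcloud compute instances create example-instance-eu --zone europe-west1-b \
  --disk name=example-disk-eu,boot=yes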

An interesting consequence of Snapshots being global is that it allows us to use them as a data transfer alternative between regions that results in no ingress-egress charges.

You can attach VMs to multiple VPCs

Although still in beta, GCP allows us to attach multiple NICs to a machine and have each interface connect to a different VPC.
Aside from the usual security benefits of perimeter and DMZ isolation, this feature gives us the ability to share third-party appliances across different projects: for example having all Internet ingress and egress traffic inspected and filtered by a common custom firewall box in the account.

Cloud Shell comes with batteries included

Cloud Shell is just awesome. Apart from its outgoing connections being restricted to ports 20, 21, 22, 80, 443, 2375, 2376, 3306, 8080, 9600 and 50051, it is such a handy tool for quickly putting together PoCs.

  • You get your own Debian VM with tmux multi tab support.
  • Docker up and running to build and test containers.
  • Full apt-get capabilities.
  • You can upload files into it directly from your desktop.
  • A brand new integrated code editor if you don’t like using vim, nano and so on.
  • Lastly, it has a web preview feature allowing you to run your own web server on ports 8080 to 8084 to test your PoC from the internet.

SSH is managed for you

GCP SSH key management is one of my favourite features so far. SSH key pairs are created and managed for you whenever you connect to an instance from the browser or with the gcloud command-line tool. User access is controlled by Identity and Access Management (IAM) roles, with GCP creating and applying short-lived SSH key pairs on the fly when necessary.

Custom instances, custom pricing

Although a custom machine type can be viewed as covering a very niche use case, it can in fact help us price exactly the RAM and CPU the job at hand needs. Having said this, it also spares us from buying plenty of RAM and CPU that we will never use (see below).

Discounts, discounts and more discounts

I wouldn’t put my head in the lion’s mouth about pricing at this time, but there are a large number of cloud cost analysis reports that categorise GCP as cheaper than the competition. Having said this, I still believe it comes down to having the right implementation and setup: you might not manage the infrastructure directly in the Cloud, but you should definitely manage your costs.
GCP offers sustained-use discounts for instances that run for a percentage of the overall billing month (25%, 50%, 75% and 100%), and it also recently released 1- and 3-year committed-use discounts which can reach up to 57% off the original instance price. Finally, Preemptible instances (similar to AWS spot instances) can reach up to an 80% discount from the list price.
Another very nice feature to help manage cost is Compute sizing recommendations. These recommendations are generated from system metrics and can help identify workloads that can be resized to make more appropriate use of resources.

Interesting times ahead

Google has been making big progress with its platform in the last two years. According to some analyses it still has some ground to cover to reach its competitors’ level, but as we just saw, GCP is coming with some very interesting cards up its sleeve.
One thing is for sure… interesting times lie ahead.

Happy window shopping!

 

Decommissioning Exchange 2016 Server

I have created many labs over the years and never really spent the time to decommission my environment; I usually just blow it away and start again.
So I finally decided to go through the process and decommission my Exchange 2016 server in my lab environment.
My lab consisted of the following:

  • Domain Controller (Windows Server 2012 R2)
  • AAD Connect Server
  • Exchange 2016 Server/ Office 365 Hybrid
  • Office 365 tenant

Being a lab, I had only one Exchange server, which had the mailbox role configured and was also my hybrid server.
Proceed with the steps below to remove the Exchange Online configuration:

  1. Connect to Exchange Online using PowerShell (https://technet.microsoft.com/en-us/library/jj984289(v=exchg.160).aspx); a connection sketch is shown after this list
  2. Remove the Inbound connector
    Remove-InboundConnector -Identity "Inbound from [Inbound Connector ID]"
  3. Remove the Outbound connector
    Remove-OutboundConnector -Identity "Outbound to [Outbound Connector ID]"
  4. Remove the Office 365 federation
    Remove-OrganizationRelationship "O365 to On-premises - [Org Relationship ID]"
  5. Remove the OAuth configuration
    Remove-IntraOrganizationConnector -Identity "HybridIOC - [Intra Org Connector ID]"
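For step 1, a minimal connection sketch using remote PowerShell (the approach covered by the TechNet link above; the credential prompt expects a hypothetical Exchange Online admin account):

# Prompt for an Exchange Online admin credential
$UserCredential = Get-Credential

# Create and import a remote session to Exchange Online
$Session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "https://outlook.office365.com/powershell-liveid/" `
    -Credential $UserCredential -Authentication Basic -AllowRedirection
Import-PSSession $Session -DisableNameChecking

# ... run the Remove-* commands listed above, then clean up the session
Remove-PSSession $Session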

Now connect to your Exchange 2016 server and complete the following:

  1. Open Exchange Management Shell
  2. Remove the Hybrid Config
    Remove-HybridConfiguration
  3. Remove the federation configuration
    Remove-OrganizationRelationship "On-premises to O365 - [Org Relationship ID]"
  4. Remove OAuth configuration
    Remove-IntraOrganizationConnector -Identity "HybridIOC - [Intra Org Connector ID]"
  5. Remove all mailboxes, mailbox databases, public folders etc.
  6. Uninstall Exchange (either via PowerShell or programs and features)
  7. Decommission server

I also confirmed Exchange was removed from Active Directory and I was able to run another installation of Exchange on the same Active Directory with no conflicts or issues.

Akamai Cloud based DNS Black Magic

Let us start with traditional DNS hosting with a DNS host or ISP. How does traditional DNS name resolution work? When you type a human-readable name such as www.anydomain.com into the browser’s address bar, that name is resolved to an Internet Protocol (IP) address hosted by an Internet Service Provider (ISP), and the browser presents the website to the user. In doing so, the website exposes its public IP address to everyone. Both the good and the bad guys know the IP address and can trace it globally. State-sponsored hackers or private individuals can launch a denial of service attack, also known as DDoS, on a website whose public IP address is known and traceable. The bad guys send an overwhelming number of fake service requests to the original IP address behind www.anydomain.com and shut the website down. In this situation, the DNS server hosting the www.anydomain.com record stops serving genuine DNS requests, resulting in a distributed denial of service (DDoS).
Akamai introduced Fast DNS, a dynamic DNS service with presence in almost every country, state, territory and region, to mitigate the risk of DDoS and DNS hijacking.
Akamai Fast DNS offloads domain name resolution from on-premises infrastructure and traditional domain name providers to an intelligent, secure and authoritative DNS service. Akamai has successfully prevented DDoS attacks, DNS forgery and manipulation through complex dynamic DNS hosting and spoof IP addresses.
As of today, Akamai has more than 150,000 servers in more than 2,000 locations, well connected across 1,200+ networks in 700+ cities in 92 countries; in most cases an Akamai edge server is just a hop away from the end user.
How does it work?

  1. The user requests www.anydomain.com
  2. The user’s ISP receives the DNS query for www.anydomain.com
  3. The user’s ISP resolves www.anydomain.com to the CNAME www.anydomain.com.edgekey.net hosted by Akamai
  4. Akamai Global DNS checks the CNAME www.anydomain.com.edgekey.net and the region the request is coming from
  5. Akamai Global DNS then forwards the request to an Akamai regional DNS server, for example Sydney, Australia
  6. The Akamai regional DNS server forwards the request to the Akamai edge server nearest the user’s location, for example Melbourne, Australia
  7. The Akamai local DNS server (Melbourne, Australia in this example) resolves the original name www.anydomain.com to www.anydomain.com.edgekey.net
  8. www.anydomain.com.edgekey.net resolves to the (possibly cached) website www.anydomain.com served by Akamai, which is then presented to the user’s browser
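If you want to observe this CNAME chain yourself, PowerShell’s Resolve-DnsName will show each hop (www.akamai.com is used here, as in the test further below; any Akamai-fronted domain shows the same pattern):

# Follow the CNAME chain from the friendly name down to the Akamai edge A record
Resolve-DnsName -Name "www.akamai.com" -Type A |
    Select-Object Name, Type, NameHost, IPAddress |
    Format-Table -AutoSize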

Since Akamai uses dynamic DNS servers, it is extremely difficult for a bad guy to track down the real IP address of the website and its origin host. In Akamai terminology, .au or .uk means the website is hosted in that country (au or uk), but the response comes to users from their own geolocation, so the IP address presented will always be that of the Akamai edge server nearest the user. In plain English, the origin host and IP address vanish inside Akamai’s complex dynamic DNS infrastructure. For example,

  1. www.anydomain.com.edgekey.net resolves to a spoof IP address hosted by an Akamai DNS server
  2. The original IP address of www.anydomain.com is never resolved by the Akamai DNS servers or by the ISP hosting www.anydomain.com

Implementing Akamai Fast DNS:

  1. Create a host A record with your ISP for www.anydomain.com pointing to the 201.17.xx.xx public IP (the VIP of Azure Web Services or any web service)
  2. Create an origin host record, or CNAME record, www.anydomain.com pointing to xyz9013452bcf.anydomain.com
  3. Now request Akamai to work its black magic on www.anydomain.com and point it to www.anydomain.com.edgekey.net
  4. Once Akamai completes the black magic, request your ISP to create another CNAME record, xyz9013452bcf.anydomain.com, pointing to www.anydomain.com.edgekey.net

Testing Akamai Fast DNS: I am using www.akamai.com as the DNS name instead of a real DNS record belonging to any of my clients.
Go to mxtoolbox.com, run a DNS lookup for www.akamai.com and you will see:
CNAME www.akamai.com resolves to www.akamai.com.edgekey.net
Open a command prompt and ping www.akamai.com.edgekey.net.
Since I am pinging from Sydney, Australia, my ping is answered by the Akamai edge server in Sydney; the result is:
Ping www.akamai.com.edgekey.net
Pinging e1699.dscc.akamaiedge.net [118.215.118.16] with 32 bytes of data:
Reply from 118.215.118.16: bytes=32 time=6ms TTL=56
Reply from 118.215.118.16: bytes=32 time=3ms TTL=56
Open a browser and go to http://www.kloth.net/services/dig.php and trace e1699.dscc.akamaiedge.net
; <<>> DiG 9 <<>> @localhost e1699.dscc.akamaiedge.net A
; (1 server found)
;; global options: +cmd
.                                            375598   IN            NS           d.root-servers.net.
.                                            375598   IN            NS           c.root-servers.net.
.                                            375598   IN            NS           i.root-servers.net.
.                                            375598   IN            NS           j.root-servers.net.
.                                            375598   IN            NS           k.root-servers.net.
.                                            375598   IN            NS           m.root-servers.net.
.                                            375598   IN            NS           a.root-servers.net.
.                                            375598   IN            NS           l.root-servers.net.
.                                            375598   IN            NS           e.root-servers.net.
.                                            375598   IN            NS           f.root-servers.net.
.                                            375598   IN            NS           b.root-servers.net.
.                                            375598   IN            NS           g.root-servers.net.
.                                            375598   IN            NS           h.root-servers.net.
;; Received 228 bytes from 127.0.0.1#53(127.0.0.1) in 3 ms
net.                                       172800   IN            NS           a.gtld-servers.net.
net.                                       172800   IN            NS           b.gtld-servers.net.
net.                                       172800   IN            NS           c.gtld-servers.net.
net.                                       172800   IN            NS           d.gtld-servers.net.
net.                                       172800   IN            NS           e.gtld-servers.net.
net.                                       172800   IN            NS           f.gtld-servers.net.
net.                                       172800   IN            NS           g.gtld-servers.net.
net.                                       172800   IN            NS           h.gtld-servers.net.
net.                                       172800   IN            NS           i.gtld-servers.net.
net.                                       172800   IN            NS           j.gtld-servers.net.
net.                                       172800   IN            NS           k.gtld-servers.net.
net.                                       172800   IN            NS           l.gtld-servers.net.
net.                                       172800   IN            NS           m.gtld-servers.net.
;; Received 512 bytes from 2001:7fd::1#53(2001:7fd::1) in 8 ms
akamaiedge.net.                  172800   IN            NS           la1.akamaiedge.net.
akamaiedge.net.                  172800   IN            NS           la3.akamaiedge.net.
akamaiedge.net.                  172800   IN            NS           lar2.akamaiedge.net.
akamaiedge.net.                  172800   IN            NS           ns3-194.akamaiedge.net.
akamaiedge.net.                  172800   IN            NS           ns6-194.akamaiedge.net.
akamaiedge.net.                  172800   IN            NS           ns7-194.akamaiedge.net.
akamaiedge.net.                  172800   IN            NS           ns5-194.akamaiedge.net.
akamaiedge.net.                  172800   IN            NS           a12-192.akamaiedge.net.
akamaiedge.net.                  172800   IN            NS           a28-192.akamaiedge.net.
akamaiedge.net.                  172800   IN            NS           a6-192.akamaiedge.net.
akamaiedge.net.                  172800   IN            NS           a1-192.akamaiedge.net.
akamaiedge.net.                  172800   IN            NS           a13-192.akamaiedge.net.
akamaiedge.net.                  172800   IN            NS           a11-192.akamaiedge.net.
;; Received 504 bytes from 2001:503:a83e::2:30#53(2001:503:a83e::2:30) in 14 ms
dscc.akamaiedge.net.          8000       IN            NS           n7dscc.akamaiedge.net.
dscc.akamaiedge.net.          4000       IN            NS           n0dscc.akamaiedge.net.
dscc.akamaiedge.net.          6000       IN            NS           a0dscc.akamaiedge.net.
dscc.akamaiedge.net.          6000       IN            NS           n3dscc.akamaiedge.net.
dscc.akamaiedge.net.          4000       IN            NS           n2dscc.akamaiedge.net.
dscc.akamaiedge.net.          6000       IN            NS           n6dscc.akamaiedge.net.
dscc.akamaiedge.net.          4000       IN            NS           n5dscc.akamaiedge.net.
dscc.akamaiedge.net.          8000       IN            NS           n1dscc.akamaiedge.net.
dscc.akamaiedge.net.          8000       IN            NS           n4dscc.akamaiedge.net.
;; Received 388 bytes from 184.85.248.194#53(184.85.248.194) in 8 ms
e1699.dscc.akamaiedge.net. 20        IN            A             23.74.181.249
;; Received 59 bytes from 77.67.97.229#53(77.67.97.229) in 5 ms
Now tracert 23.74.181.249 on a command prompt
Tracert 23.74.181.249
Tracing route to a23-74-181-249.deploy.static.akamaitechnologies.com [23.74.181.249]
over a maximum of 30 hops:
1     1 ms     1 ms     1 ms  172.28.67.2
2     4 ms     1 ms     4 ms  172.28.2.10
3     *        *        *     Request timed out.
4     *        *        *     Request timed out.
5     *        *        *     Request timed out.
6     *        *        *     Request timed out.
7     *        *        *     Request timed out.
8     *      125 ms    75 ms  bundle-ether1.sydp-core04.sydney.reach.com [203.50.13.90]
9   172 ms   160 ms   165 ms  i-52.tlot-core02.bx.telstraglobal.net [202.84.137.101]
10   152 ms   192 ms   164 ms  i-0-7-0-11.tlot-core01.bi.telstraglobal.net [202.84.251.233]
11   163 ms   183 ms   176 ms  gtt-peer.tlot02.pr.telstraglobal.net [134.159.63.182]
12   151 ms   157 ms   155 ms  xe-2-2-0.cr2-lax2.ip4.gtt.net [89.149.129.234]
13   175 ms   160 ms   154 ms  as5580-gw.cr2-lax2.ip4.gtt.net [173.205.59.18]
14   328 ms   318 ms   317 ms  ae21.edge02.fra06.de.as5580.net [78.152.53.219]
15   324 ms   325 ms   319 ms  78.152.48.250
16   336 ms   336 ms   339 ms  a23-74-181-249.deploy.static.akamaitechnologies.com [23.74.181.249]
Now open the hosts file of a Windows machine (C:\WINDOWS\system32\drivers\etc\hosts) and add the Akamai spoof IP: 172.233.15.98   www.akamai.com
Browse to the www.akamai.com website in Internet Explorer and it will point you to 172.233.15.98.
Open a command prompt and run nslookup 172.233.15.98:
Server:  lon-resolver.telstra.net
Address:  203.50.2.71
Name:    a172-233-15-98.deploy.static.akamaitechnologies.com
Address:  172.233.15.98
In conclusion, Akamai tricked the web browser into going to the Akamai edge server in Sydney, Australia instead of the original Akamai server hosted in the USA. A user will never know the original IP address of the www.akamai.com website. Abracadabra: the original IP address has vanished…

Exchange Server 2016 in Azure

I recently worked on a project where I had to install Exchange Server 2016 on an Azure VM. I chose a D2-sized Azure VM (2 cores, 7 GB RAM) thinking that would suffice; well, that was a big mistake.
The installation made it to the last step before a warning appeared informing me that the server was low on memory resources, and the setup eventually terminated, leaving the installation incomplete.
Let this be a warning to the rest of you, choose a D3 or above sized Azure VM to save yourself a whole lot of agony.
To try and salvage the Exchange install I attempted to re-run the installation, as setup detects an incomplete installation and tries to pick up where it failed previously. This did not work.
I then tried to uninstall Exchange completely by running the command: “Setup.exe /mode:Uninstall /IAcceptExchangeServerLicenseTerms”. This also did not work, as it was trying to uninstall an Exchange role that never got installed. This left me one option: manually remove Exchange from Active Directory and rebuild the Azure VM.
To remove the Exchange organisation from Active Directory I had to complete the following steps:

  1. On a Domain Controller | Open ADSI Edit
  2. Connect to the Configuration naming context
  3. Expand Services
  4. Delete CN=Microsoft Exchange and CN=Microsoft Exchange Autodiscover
  5. Connect to the Default naming context
  6. Under the root OU, delete OU=Microsoft Exchange Security Groups and CN=Microsoft Exchange System Objects
  7. Open Active Directory Users and Computers
  8. Select the Users OU
  9. Delete the following:
    • DiscoverySearchMailbox{GUID}
    • Exchange Online-ApplicationAccount
    • Migration.GUID
    • SystemMailbox{GUID}

After Exchange was completely removed from Active Directory and my Azure VM was rebuilt with a D3 size I could successfully install Exchange Server 2016.

Exchange Server 2016 install error: “Active Directory could not be contacted”

I recently worked on a project where I had to install Exchange Server 2016 on an Azure VM and received error “Active Directory could not be contacted”.
To resolve the issue, I had to complete the following steps (a PowerShell sketch for steps 2 and 3 follows the list):

  1. Remove the Azure VM public IP address
  2. Disable IPv6 on the NIC
  3. Set the IPv4 DNS suffix to point to your domain. If a public address is being used, it will be set to reddog.microsoft.com by default.
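A minimal PowerShell sketch for steps 2 and 3 (the interface alias "Ethernet" and the suffix "contoso.local" are hypothetical; substitute your own NIC name and AD domain):

# Step 2: disable the IPv6 binding on the NIC
Disable-NetAdapterBinding -Name "Ethernet" -ComponentID "ms_tcpip6"

# Step 3: point the connection-specific DNS suffix at your AD domain instead of reddog.microsoft.com
Set-DnsClient -InterfaceAlias "Ethernet" -ConnectionSpecificSuffix "contoso.local"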

Once done, the installation could proceed and Active Directory was contactable.

Break down your templates with Linked Templates (Part 1)

Templated deployment is one of the key value propositions of moving from the Azure classic deployment model to Resource Manager (ARM). It is probably the one key feature that makes a big stride towards Infrastructure as Code (IaC). Personally, I have been looking forward to this feature since it’s a prominent feature on the other competing platform.

Now that this feature has been live for a while, one aspect I find interesting is the ability to link templates in Azure Resource Manager. This post is part of a three-part series highlighting ways you could deploy linked templates. The first part of the series describes the basics: building a template that creates the base environment. The second part will demonstrate the linked template. The third part will delve into a more advanced scenario with Key Vault and how we can secure our linked templates.

Why linked templates?  We could get away with one template to build our environment, but is it a good idea?  If your environment is pretty small, sure, go ahead.  If the template becomes unmanageable for you, which is mostly the case if you are serious about ‘templating’, then you have come to the right place: linking or nesting templates is a way for you to decouple your templates into manageable chunks.  This allows you to ‘branch out’ template development, especially when you have more than one person, resulting in smaller, manageable templates rather than one big template.  Plus, this makes testing and debugging a little easier as you have a smaller scope to work with.


OK, so let’s start with the scenario that demonstrates linked templates.  We will build a two-tier app consisting of two web servers and a database server.  This includes placing the two web servers in an availability set, with a virtual network (VNET) and storage account to host the VMs.  To decouple the components, we will have three templates: one template that defines the base environment (VNET, subnets and storage account); a second template that builds the virtual machines (web servers) in the front-end subnet; and finally a third template that builds the database server.

[Image: AzTemplate-1 - First template (this blog post)]

[Image: AzTemplate-2 - Second template]

[Image: AzTemplate-3 - Third template]

[Image: AzTemplate-4]

The building process

First, create a resource group; this example relies on a resource group for deployment.  You can create the resource group now or later during deployment.

Once you have a resource group created, set up your environment – SDK, Visual Studio, etc. You can find more info here.  Visual Studio is optional but this demo will use this to build & deploy the templates.

Create a new project and select Azure Resource Group.  Select Blank template.

[Image: AzTemplate-5]

This will create a project with a PowerShell deployment script and two JSON files: azuredeploy and azuredeploy.parameters.  The PowerShell script is particularly interesting as it has everything you need to deploy the Azure templates.

[Image: AzTemplate-6]

We will start by defining the variables in the azuredeploy template.  Note these can be parameterised for better portability.

Then we define the resources – storage account and virtual network (including subnets).

We then deploy it – in Visual Studio (VS) the process is simply a matter of right-clicking the project, selecting New Deployment… and then Deploy.  If you didn’t create a resource group earlier, you can create it here as well.
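Outside Visual Studio, the same deployment can be run directly with the AzureRM cmdlets that the generated PowerShell script wraps. A minimal sketch, assuming a hypothetical resource group name and the template paths used in this demo:

# Create the target resource group if it does not already exist
New-AzureRmResourceGroup -Name "LinkedTemplatesDemo" -Location "Australia East" -Force

# Deploy the base template together with its parameter file
New-AzureRmResourceGroupDeployment -ResourceGroupName "LinkedTemplatesDemo" `
    -TemplateFile ".\templates\azuredeploy-base.json" `
    -TemplateParameterFile ".\templates\azuredeploy.parameters.json" `
    -Verbose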

[Image: AzTemplate-7]

The template will deploy and a similar message should be displayed once it’s finished.

Successfully deployed template ‘w:\github-repo\linkedtemplatesdemo\linkedtemplatesdemo\templates\azuredeploy-base.json‘ to resource group ‘LinkedTemplatesDemo‘.

On the Azure portal you should see the storage account and virtual network created.

[Image: AzTemplate-8]

[Image: AzTemplate-9]

The next part in the series describes how we modify this template to call the web and db servers templates (linked templates).  Stay tuned :).

 

 

 
