
Research

Welcome to the Windsor Group knowledge base of articles and publications that demonstrate our understanding of many of the complex business challenges and key issues faced by companies around the world.

Quantum computing used to be the stuff of science fiction, but this technology seems to now be hurtling toward science reality. With the waves that cloud computing created in the business world, everyone wants to stay ahead on the latest and greatest tech. But sometimes that means understanding not only the technology’s potential but also the difficulties it faces before getting to market.

What’s the big deal about quantum computing?

The biggest deal about quantum computing is that it’s even remotely possible. The fundamentals of quantum computing rely on advanced and complex physics still in its early stages of research. Even Richard Feynman, who won the Nobel Prize for his work in quantum electrodynamics, once said, “Nobody understands quantum mechanics.”

While the actual math and science behind quantum computing are still fuzzy and full of kinks, both public and private researchers are getting a clearer idea of the “how” of quantum physics (just not exactly the “why”). We are surrounded by — and made up of — atoms and their subatomic components: electrons, protons, and neutrons. Instead of using a voltage circuit to make up the digital bits that allow computers to perform calculations, quantum computers rely on subatomic particles to make up quantum bits, or qubits.

These particles are controlled using superconducting circuits or by levitating individual atoms inside electromagnetic fields. It’s work on an incredibly small scale, but when treated correctly, these qubits can reach a counterintuitive state called “superposition.”

Instead of creating 1 and 0 bits through the variable application of a current, a single qubit in superposition can be a 1 and a 0 at the same time. While this might seem counterintuitive at first, a series of qubits in superposition can blaze through certain time-consuming problems about 5,000 times faster than a modern computer. Yes, you read that right: 5,000 times faster.
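To see what superposition means mathematically, here is a toy sketch in plain Python (our own illustration, not real quantum hardware and not any vendor's API): a qubit is just a pair of amplitudes, and measuring it yields 0 or 1 with probabilities given by those amplitudes.

```python
import math

# Toy model: a qubit's state is a pair of amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Measurement yields 0 with probability |alpha|^2
# and 1 with probability |beta|^2.
def measurement_probabilities(alpha, beta):
    p0 = abs(alpha) ** 2
    p1 = abs(beta) ** 2
    assert math.isclose(p0 + p1, 1.0), "state must be normalized"
    return p0, p1

# An equal superposition: the qubit is "both" 0 and 1 until measured,
# with a 50/50 chance of each outcome.
alpha = beta = 1 / math.sqrt(2)
print(measurement_probabilities(alpha, beta))
```

Until the measurement happens, the state genuinely carries both amplitudes at once, which is what lets many qubits explore many possibilities in parallel.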

But quantum computing isn’t just about speed. It’s also about security. QuTech, a Dutch research institute, is looking into applying quantum entanglement to create instantaneous and secure data transfer. Entanglement is a whole other beast of complicated physics, but essentially two entangled qubits retain a connection no matter the distance or barriers between them. It’s been likened to the teleportation of data, since affecting one particle will instantly change the other particle too. Better yet, no one can hack the transfer of information — only disrupt it.
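Entanglement can be sketched in the same toy model (again our own illustrative Python, not QuTech's technology): a two-qubit Bell state carries amplitude only on the outcomes 00 and 11, so the two measurements are always perfectly correlated.

```python
import math

# Bell state (|00> + |11>) / sqrt(2): amplitude sits only on the
# correlated outcomes, regardless of how far apart the two qubits are.
bell_state = {"00": 1 / math.sqrt(2), "01": 0.0, "10": 0.0, "11": 1 / math.sqrt(2)}

def outcome_probabilities(state):
    # Probability of each outcome is the squared magnitude of its amplitude.
    return {bits: abs(amp) ** 2 for bits, amp in state.items()}

probs = outcome_probabilities(bell_state)
# The mixed outcomes 01 and 10 never occur: reading one qubit instantly
# tells you what the other one will read.
print(probs)
```

This correlation is what the "teleportation" analogy refers to, and it is also why tampering with an entangled channel is detectable: any interference disturbs the correlations.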

The moon race of our time

Governments and private entities alike are chasing down the economic potential of quantum computers and quantum data transfer. Google, IBM, and Microsoft are heavily investing in their own development projects, and venture investors heaped $241 million on quantum computing startups in 2017. China is building a $10 billion research facility for quantum research, and the European Union has its own billion-dollar research program.

Whoever figures out quantum computing first stands to gain the most benefit from the technology — from increased processing power for solving other computing quandaries and theoretically impossible-to-break security protocols to faster simulations for new medications and drug therapies.

Between the proposed speed and security, quantum might seem like a natural progression for computing. But even while companies such as Microsoft are projecting a five-year launch for quantum computing in Azure and QuTech is racing to a 2020 deadline for a four-city quantum computing network, researchers still have a significant number of bugs to work out of qubits.

Not so fast ... yet

And now we come to the catch. Or several catches. While theory suggests that the high-powered possibilities of quantum computing are well worth the effort, we’re still a ways off from actually bringing quantum computing to any broad market — if it deserves a place there at all.

One of the main reasons for this is that qubits are highly unstable. Shielding is a must for these computers, as are operating temperatures close to absolute zero, below minus 450 degrees Fahrenheit. Processing power is also sacrificed to correct qubit misfiring errors. Although researchers have made progress in stabilizing qubits in large groups, it’s still an incredibly expensive and finicky endeavor.

And while quantum computing theory has had at least a couple of decades of academic study, actual proofs of principle in labs are rare, especially at a large scale. We also have no idea what application design for a quantum computer would even look like. Plus, the field is facing a shortage of qualified physicists and engineers able to tackle these tough problems.

We’re not saying that we should lose our enthusiasm for quantum computing. It’s a matter of managing expectations, especially in terms of how the technology can affect your business. For now, it might be a better idea to keep your IT plans focused on cloud computing rather than getting too far ahead of the curve.

The Windsor Group strives to keep up with the latest technology trends so that we can fully discuss and advise enterprises on the options that fit their goals. Learn more about our team.


Between more frequent fires, hurricanes, and other extreme weather events, ensuring that your company has an up-to-date disaster recovery plan is more important than ever. In addition, if you still manage a large number of business-critical systems from an internally managed data center, you might want to consider cloud services as another line of protection against hardware failures and natural disasters.

Whether the weather is dry or wet

With California moving into a year-round fire season and coastal areas like Florida, Texas, and the Carolinas facing hurricanes with unprecedented wind and flood damage, businesses of all sizes are evaluating their disaster recovery plans against the reality of a more unpredictable environment.

In the United States, there are no reporting requirements for states on the costs to repair after a major disaster — although we know that federal funding for natural disasters was nearly $140 billion in 2017. As these extreme weather events get more frequent or affect areas with significant business infrastructure, these costs will only continue to rise.

Major data centers tend to be located in areas with low risk of natural disasters, like Nevada and Utah. Facebook’s first data center is in Oregon, where the biggest threat to its integrity is a terrible snowstorm. If your cloud provider is backing up your data in a low-risk area — which is always a good question to ask when you’re looking at providers — at least one copy of your data will be safe even if the primary server is in a more high-risk area.

When we talk disaster recovery, we often talk about service outages or hardware failures. But unintentional, everyday failures and major weather events alike should be a concern for all businesses when they refresh their disaster recovery plans and protocols.

The cloud is all about automated backups and ease of access

If a major outage or disaster closes your physical office, cloud services accessible from anywhere can help maintain your business continuity no matter what the situation is on the ground. Any loss of local services becomes a simple hardware replacement, rather than the loss of terabytes of data.

But if you’re going to include cloud services as a significant portion of your disaster recovery plan, you need to prepare yourself, your team, and your company for the planning required to develop a good recovery plan. Before you sign any service-level agreement (SLA) with a cloud provider, you need to understand the provider’s role in the event of a disaster, including the protections it has for its own data centers and your company’s responsibilities in a disaster situation. This means:

  • Getting your current contracts in order so that you understand your present situation and future expectations
  • Going over disaster preparedness and access options if this isn’t part of your normal plan
  • Reviewing recovery services
  • Discussing regular audit reports with your vendor
  • Including your vendor in your disaster recovery efforts
  • Understanding the vendor’s standard SLA and its references to disaster recovery

Building a relationship with your cloud partner and taking steps to ensure you’re looking for an active partner in disaster recovery can set the stage for success when an extreme event does arise. In addition to following my suggestions above, you should work to establish clear lines of communication on both ends of your vendor-company relationship.

Where the cloud fits into your disaster recovery plan

Once you have a vendor that can accommodate your disaster recovery needs, the next step is making sure your recovery plan is up-to-date and easily accessible. Good disaster recovery planning typically includes:

  • The recovery time objective (RTO) for getting an application back online
  • The recovery point objective (RPO) to define the maximum amount of data, measured in time since the last backup, that you can afford to lose after a major incident
  • Your specific recovery goals for a variety of situations (data loss, hardware loss, extended absence from a physical location, etc.)
  • Cleanup processes
  • A list of specific tasks to be completed pre- and post-disaster
  • Backup software for installation
  • Security configuration and employee access to the secure disaster recovery system environment as needed
  • Daily or weekly cloud backups, to reduce the loss of work in the event of a disaster
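The RPO and backup-frequency items above boil down to simple arithmetic: your worst-case data loss is one full backup interval, so the backup schedule has to fit inside the RPO. A minimal sketch, with hypothetical intervals of our own choosing:

```python
from datetime import timedelta

# Worst-case data loss equals one full backup interval, so a plan meets
# its RPO only when the interval is no longer than the RPO.
def meets_rpo(backup_interval: timedelta, rpo: timedelta) -> bool:
    return backup_interval <= rpo

print(meets_rpo(timedelta(days=1), rpo=timedelta(hours=24)))  # daily backups fit a 24-hour RPO: True
print(meets_rpo(timedelta(days=7), rpo=timedelta(hours=24)))  # weekly backups leave a gap: False
```

The same check works in reverse: start from the RPO your business can tolerate, then set the backup schedule to match.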

These are just a few components of a successful disaster recovery plan, but they are a good place to start as you work on incorporating cloud services into your business continuity plan.

Another opinion can be incredibly helpful when you’re preparing a disaster recovery plan. The Windsor Group can help you assess your options and find the best solution for your business. Click here to get started with a strategy session.


Businesses are turning to cloud services to reduce costs, but sometimes this strategy can backfire. This is especially true when you’re making a massive shift from an on-premises IT infrastructure to a partially or entirely cloud-based infrastructure. Cost overruns with the cloud don’t have to be the norm for this transition; proper preparation and migration strategies can prevent these unexpected expenses from dampening your enthusiasm for the cloud.

Movement to cloud infrastructure is inevitable

Non-cloud IT infrastructure spending is losing ground in the battle against cloud infrastructure. Spending on non-cloud infrastructure still makes up more than half of all IT infrastructure expenditures, but cloud spending has been rapidly gaining ground. In 2016, traditional infrastructure accounted for 62.4% of spending. The next year, it dropped to 57.2%.

As cloud chips away at the dominance of traditional infrastructure, CIOs starting their cloud migration journey might be bouncing back and forth between public, private, and hybrid clouds. Many businesses opt for a hybrid option to get the best of all worlds, but it’s good to consider the benefits of public and private clouds before you make a final decision. For example, public clouds come with platforms to boost development and reduce operational workload while private clouds have become popular in industries where security and compliance are major issues.

Cloud migration is often driven by cost. If you’re only paying for what you use — either in seconds or in storage space — you should, theoretically, be paying less.

Runaway costs are a real issue

A study pitting Amazon Web Services (AWS) against an on-premises system found the cost of ownership of the on-premises system to be close to half of what AWS charged, even on a discounted, three-year contract. While the study itself leaves out “hidden” costs such as operational electricity expenses, it’s a great demonstration of why migration to the cloud should be a careful exercise.

CIOs and their C-suite counterparts have to remember that migrating to cloud services can also mean huge system and platform upgrades. In addition, savings can take a while to appear, if they show up at all.

Cost overruns are incredibly common during the migration phase, largely because it’s a new environment and takes some getting used to. The best defense against this risk is to thoroughly research your provider’s cost structure, your migration plans, and the planned cloud resource allocation throughout your company. Once the research is done, everyone who has access to your cloud needs to know when and how much they can use it to stay within budget.

IT departments coming from an on-premises system might not always fully consider the byte-by-byte costs, especially if they’re relying on estimates from your current infrastructure. Assign current costs to your initial infrastructure so you can later compare costs to new cloud infrastructure, as well as expected migration costs. This helps connect the two ideas, especially for team members who might not always pay attention to important emails from the IT department.

Uncovering your current costs and establishing a baseline

Determining the savings and efficacy of a system is difficult if you don’t have thorough initial data. A do-it-yourself infrastructure assessment tool can help set you on the right path, but outside advice can often get to the heart of the issue. Since having good assessment data is an important part of determining how well your cloud migration went, eliminating error by reaching out to a top advisory firm is the best option.

A good consulting firm can help you pinpoint the data you need. However, you can also get started on your own. Assemble IT operational cost data, a list of all hardware and software (including their ages), current needs for the new system, key skills your staff needs, potential areas for IT performance improvement, your infrastructure governance, a risk management assessment, and a list of corporate goals and implementation benchmarks.
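To make the comparison concrete, the cost data you assemble can be rolled up into a monthly baseline and set against a provider's estimate. The line items and dollar figures below are hypothetical, purely to show the arithmetic:

```python
# Hypothetical monthly on-premises line items (illustration only).
on_prem_monthly = {
    "hardware_amortization": 4200.0,
    "power_and_cooling": 900.0,
    "staff_time": 6500.0,
    "software_licenses": 1800.0,
}

def monthly_total(costs: dict) -> float:
    # Roll individual line items up into one comparable monthly figure.
    return sum(costs.values())

baseline = monthly_total(on_prem_monthly)
cloud_estimate = 11000.0  # hypothetical provider quote, migration overhead included

savings = baseline - cloud_estimate
verdict = "saves" if savings > 0 else "adds"
print(f"baseline ${baseline:,.0f}/mo vs. cloud ${cloud_estimate:,.0f}/mo: "
      f"{verdict} ${abs(savings):,.0f}/mo")
```

Keeping the baseline as explicit line items, rather than one lump sum, also makes it obvious which costs the cloud estimate does and does not replace.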

Once this data has been collected, organized, and reviewed by people within your company or in partnership with your business, projections for cloud costs during and after migration should be a lot more accurate. And the more aware you are of the potential costs, the happier you’ll be in the long run.

The experts at Windsor Group can help you uncover the whole story about your current IT infrastructure, as well as provide a strategy session to get you on the road to a smooth cloud migration. Schedule your session today.


If you’ve tried to hire for an IT position recently, you may have found the task a little more difficult than expected for a sector driving billions of dollars of the economy. The IT talent shortage is nothing new, and the problem won’t be resolved any time soon. Therefore, now is a good time to assess your strategy for attracting and keeping young talent while still leaving growth opportunities for older employees.

Where the IT talent shortage stands

According to the U.S. Bureau of Labor Statistics, more than 1.5 million people were employed in an IT occupation in 2017. This doesn’t include outsourced or offshore positions, so the actual pool of potential IT talent is likely higher. Still, particularly high-profile sectors struggle to fill positions despite plenty of openings and competitive wages — the median wage in this sector is around $89,000 annually.

The rise of automation, artificial intelligence, and machine learning in cybersecurity, infrastructure management, and even basic infrastructure organization has increased the demand for people who can develop applications in these complex environments. However, there are fewer than 25,000 “genuine experts” working in artificial intelligence worldwide.

Quantum computing, the new arms race for major tech companies and governments alike, is suffering a major shortage on top of declining enrollment in graduate programs. The extremely experimental nature of quantum computing often requires a master’s degree or a Ph.D. in physics. By some estimates, quantum computing has fewer than 1,000 leading researchers and a huge reliance on graduate students. This is forcing companies into the onerous process of international hiring in the midst of decreased visa availability.

The shortages in these niche, cutting-edge fields pale in comparison to the cybersecurity talent shortage. Estimates vary, but experts predict the field will have between 1.5 million and 3.5 million unfilled positions by 2020-2021. That’s a huge number of open positions, especially when you consider the quickly approaching retirement years of more seasoned experts.

Fight the shortage by increasing your employee retention

Enticing your best employees to stay with your business isn’t just about a pay raise — although that may help. Offering a challenging environment that recognizes and encourages employees’ personal goals is one way to increase their loyalty to your business as well as their value as employees.

Cross training for people who are interested in shifting careers to areas of need can also help pull from a more accessible pool of talent. Not only do you know that these employees are a good cultural fit for your enterprise, but your investment in their future also demonstrates care and interest in employee success. Employees win by getting a new skillset and the ability to transition to a new position, and you win by filling a difficult position while opening one that might be easier to replace.

If you have older employees, enticing them to stay into their retirement years could be as simple as setting expectations of their role in the company to train new IT experts — ensuring that no knowledge is lost — and mentor recent hires. Knowledge-sharing programs are another way to encourage your team’s older members to engage with a younger generation and inspire new ideas from older points of view. Adding flexible hours, flexible vacation, and sabbaticals is another way to entice older (and younger) employees to stay.

Working externally to handle the talent shortage

Operating with leaner teams might not seem like an ideal solution for the shortage, but it may be your best option for now — especially if you’re struggling to fill key positions. Automating processes with artificial intelligence or working with outside vendors to manage your infrastructure, general IT services, and security can help lessen the burden while you search for the perfect fit for your team.

Speaking of perfect fit, sometimes “nice-to-haves” in your job description are just that: nice to have. Instead of looking for specific skills or certifications, look for people who demonstrate the willingness and the talent to learn and step out of their comfort zone. This goes back to keeping current employees: Invest in and encourage their education in IT fields. Offer lower-stress mentoring positions to seasoned professionals who are stepping away from the workforce. Try to avoid burning bridges with outgoing or retiring employees.

Whether you change your employee perks, rely more on automation or outsourced staff, or do both, tackling the IT talent shortage can still be difficult. Taking a hard look at your processes and your sourcing strategy may be the key to lightening the load on your current IT team without compromising your capabilities.

Finding good IT talent can be tough in an ideal market. But with today’s IT talent shortage, filling key positions is harder than ever. Contact the Windsor Group for advice on searching for qualified IT professionals.


“Mainframe” doesn’t make many headlines these days. This could be because mainframes are a relatively “traditional” way of hosting large IT infrastructure and data processing. Another reason might be that innovations in the space are starting to slow as enterprises look to the cloud to replace these once-critical components of a business.
