Author Archives: Shagul


Outdated And Unsecured – Your Application Is A Sitting Duck

Companies with revenues in the millions and even billions of dollars face the risk of running outdated software applications. Why do they continue to do so? Are the risks only restricted to outdated software? 

Applications built on the latest stack are also vulnerable when appropriate security measures and policies are not followed. Facebook was under scrutiny for storing millions of passwords unencrypted. According to Facebook, the passwords were not stolen, but the incident still resulted in a loss of trust, which further exacerbated the situation at Facebook. Facebook was fortunate that the passwords were not hacked. There are many instances where encrypted and unencrypted information was stolen, as was the case with LinkedIn, Yahoo, and many others.
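Storing passwords in plain text is avoidable with standard-library primitives alone. As a minimal sketch (not a full security recommendation; the iteration count and salt size here are illustrative), salted hashing with PBKDF2 looks like this in Python:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor; tune for your hardware

def hash_password(password, salt=None):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256 with a per-user salt."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))  # True
print(verify_password("wrong", salt, digest))   # False
```

The point is that the server never needs to keep the password itself, only the salt and digest, so a leaked table does not directly expose user credentials.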

In this article we will focus primarily on the topic of outdated applications and how to approach them.

One major reason that organizations continue to run outdated applications is the cost involved and the return on investment (ROI). Companies see upgrading as an expenditure that is not worthwhile unless there is no other choice. An organization may use agile and lean methodologies to build software, yet not necessarily end up with a product that can stay agile and lean.

Stay Relevant

The decision to stay current is largely driven by the culture. An organization that is enthusiastic about the trends in the industry will try to analyze what is worth adopting and go for it. 

When responsive sites were becoming popular, it was an easy win for many. It was like upgrading the upholstery: there was definitely a cost involved, but it was much lower for a new look and feel on the same legacy application.

So, how do you convince your management that your software needs to stay current? Your organization can either be motivated by new capabilities or be threatened by the consequences of non-compliance, a security breach, and so on.

The threat factor usually does not work well; the news cycle has plenty of threat factors already. The Equifax hack, the DoorDash breach, the Capital One breach, the Target breach, and many more fill our news on a regular basis. Many organizations do not understand their vulnerability. Even if they do, analyzing and mitigating the risk is a complex problem.

If your application has reached a point where upgrading is a high-risk project, remember that not upgrading carries a much higher risk.

The application may be so important that you can’t risk disturbing it. Or it may not be important enough for you to care about. Either way, the risk remains.

The risk of running outdated applications is that they already contain many known, and often actively exploited, vulnerabilities.

How do you bring about change in an organization? Let’s look at a few things that you can focus on to bring attention to the problem.

Be More Specific

  • “There is a new security patch to install. When can we do it?” vs. “A regular user can impersonate an admin and perform admin functions. We have a patch. We need to apply it immediately.”
  • “The next version of the software has a lot of good features.” vs. “The next version of the software will allow us to deliver content seamlessly across multiple channels.”
  • “Upgrading to the latest version is going to be a major effort.” vs. “Upgrading to the latest version is going to take 6 months and cost $500,000.”

A security alert notification that I received in January 2019 read, “Users with User.VIEW permission can update other user’s password.” The implication is that if someone can view your profile information, they can also update your password. What risk are you facing? If you have single sign-on (SSO) with a third-party identity manager and that is the only way users can access the affected system, your risk may be low. If not, you are taking a big risk by not applying the patch: a hacker can gain admin access simply by changing the admin password.

The Equifax security breach, caused by a failure to patch vulnerable software, resulted in the information of millions of users being stolen. How do you keep track of such security flaws in a nine-year-old library when the flaw is reported only around the time of the attack? Some vendors continuously track vulnerabilities and can generate reports for your application. This only works on reported vulnerabilities; it is not effective against a zero-day exploit like the one at Equifax, where the attack happened before the vulnerability was publicly reported.
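At its core, this kind of vendor scan is a version-range lookup against an advisory database. A toy sketch of the idea, with a hand-written advisory entry standing in for the live feeds (such as the NVD) that real scanners consume; the version range here is simplified for illustration:

```python
# Hypothetical advisory data; real scanners pull from live vulnerability databases.
# Each entry: (vulnerable-from, fixed-in, advisory id), range is [from, fixed).
ADVISORIES = {
    "struts": [((2, 3, 0), (2, 3, 32), "CVE-2017-5638")],
}

def parse_version(v):
    """Turn '2.3.5' into a comparable tuple (2, 3, 5)."""
    return tuple(int(part) for part in v.split("."))

def audit(dependencies):
    """Return (package, version, advisory) for every vulnerable dependency."""
    findings = []
    for name, version in dependencies.items():
        for vuln_from, fixed_in, advisory in ADVISORIES.get(name, []):
            if vuln_from <= parse_version(version) < fixed_in:
                findings.append((name, version, advisory))
    return findings

print(audit({"struts": "2.3.5", "some-other-lib": "1.0.0"}))
```

Real tools layer reporting, transitive-dependency resolution, and continuously updated feeds on top of exactly this kind of check, which is why they cannot flag a vulnerability before it is reported.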

If your application is a moving target rather than a sitting duck, you reduce your exposure to attacks like the one on Equifax. Keeping your application current should be a high priority.

Make It Clear

We have all heard it at some point: lasting change comes from within. This applies to individuals, groups, and organizations.

When the drive does not come from within the organization, outside compliance requirements are not very effective. If compliance requires that you take appropriate backups, a diligent organization will also make sure that the backups are actually usable.

You cannot force your business leaders to upgrade if they see no value in it.

What are the advantages and disadvantages of upgrading? More importantly, what are the direct and hidden costs? How do you mitigate the risks?

Are Your Clients Happy?

In certain scenarios, a major client of yours may mandate that your product be built on current technologies. In that case, there is a financial consequence: losing the client if your product is built on older versions of software.

Do you have a competitor offering a state-of-the-art product that uses the latest advancements in artificial intelligence, machine learning, and personalization, and also costs less?

Why not experience new innovation?

You should have innovation labs where your team can explore some of the latest technologies and products.

Talk To Others

I have never heard a vendor say that their new product is very challenging.

Are there other clients willing to talk about their experience with the new version of the software? If you really want to hear the true story, reach out to other users directly, without a sales and marketing pitch from vendors. Market research firms such as Gartner and Forrester provide valuable insights, and the concerns they raise can be extremely valuable.

Cost

Organizations often continue to pay a high price for old technologies when a newer subscription model could save them money. Identify the cost differences and the options available.

You may not be able to tell this to your boss

In a very sensitive environment, a security breach due to outdated software can cost the leadership their jobs. Even if leadership can blame a breach on a systems engineer who did not apply a patch, leadership is where the buck should stop.

Build The Right Skills

Are you paying attention to the needs of your team? Are the skills your team has going to become obsolete? Should you invest in your team, or risk losing your best people? Look out for soft signs that your teams are becoming disengaged and disillusioned from working with outdated tools and products.

A disengaged team is not going to be able to identify the risks and take proactive steps to mitigate those risks.

One of the major challenges many businesses face today is that they do not have people with the right skills either to upgrade a software product to the latest version or to analyze the impact of upgrading.

It is important to build the necessary skills so you can either do the work yourself or at least know when you need help, and what type of help.

Any organization that is able to afford a software team should invest in making sure that the team has the capabilities to do it all themselves.

It is perfectly fine to get outside help. But you should not be completely at the mercy of another organization to make you successful.

Upgrades of any major piece of software require preparation and planning. You should expect challenges especially if you have spent a significant amount of time building and customizing the application.

Conclusion

Being current does not mean you can stop monitoring your applications and networks for security threats. Being outdated means you risk becoming a sitting duck waiting to be breached.

Do not stop here. It is time to reevaluate what you have and protect the data of those who have trusted you with it.



Are You Struck By The Glassdoor?

The blow from anonymous Glassdoor reviews can have serious and disproportionate consequences for small businesses. It can feel like being struck by a revolving glass door. Bad reviews make it hard to hire the best people: many potential employees research a company before joining it. The same applies to clients who want to know whom they are doing business with. For big companies, the reviews may have some effect, but not one as consequential as for small companies. This article illustrates our approach to building organizational culture.

Bad reviews deliver a double blow to small companies and a slight dent to big ones.

Some reviews are the result of missed opportunities, where a company failed to have the most important conversations with its employees. Others stem from systemic failures, where many things go on without proper checks and balances. Some are due to organizations failing to act promptly on employee reports.

People who write scathing reviews want to send a message to the companies and future candidates. 

It may not be the most effective way to bring change to a company, though it may serve as a warning for potential candidates who want to join the firm. A review intended to punish the company will barely have any impact on the leadership; the person who wrote it will be dismissed as a disgruntled employee.

Is Anonymity a Gift?

Anonymity is critical when employees fear some kind of reprisal from an employer, whether former, current, or future. It is easy these days to pull up someone’s name and see how they behave online.

Anonymity has its benefits, but it does not give us the right to irresponsible comments and reviews.

According to a CareerBuilder survey, 70% of employers snoop on candidates. This raises an important point: every one of us is responsible and accountable for everything we say and write, whether anonymously or with our name attached. Just as candidates research potential employers, employers research their potential employees.

In case of violations that require a legal remedy, affected parties should not hesitate to report them to the authorities. Do not stop at Glassdoor.

What Are The Common Complaints

I would like to discuss some common concerns from the Glassdoor reviews of companies on Fortune magazine’s 100 Best Companies To Work For (2019) list. I will deliberately avoid identifying which company each review is about and focus on the concern rather than the company. Many of these concerns relate to the top ten on the list.

Personal Concerns

  • No work-life balance
  • Insufficient compensation
  • No proper career path
  • Company-specific skills that do not help with a transition
  • Minimal annual pay raises
  • Not enough perks
  • No appreciation from leadership
  • No one listens

Some of the above concerns are relative: one person may complain about there being no lunch, another about the quality of the lunch. Others are serious. It may not be possible to have work-life balance without the support of managers and employers. If employers expect a certain kind of commitment, it is important to invest in people.

A sense of urgency cannot be expected from employees when the company’s leadership is laid back.

Organizational Concerns

  • Promotion by favoritism
  • Leaders hiring less-qualified buddies from former companies
  • Bad managers

Unqualified leaders do tag-team and migrate from one organization to another.

These are serious concerns. Organizations need to take such accusations seriously and find ways to remedy them. It needs to start at the top: if CEOs encourage favoritism, they cannot expect those who work under them to do the right thing.

Leadership Concerns

  • Task managers with poor people skills
  • Leadership detached from reality
  • No inspiring or motivational leadership
  • Inaccessible leadership
  • Unresponsive management
  • Incompetent leadership
  • Arrogant leadership
  • Lack of mentorship

Leadership is not determined by title but by qualities within oneself.

Leadership concerns are not limited to top-level executives. Every person is a leader in their own capacity within the organization. Appropriate training along with continuing education may help.

Culture

  • Shame-based culture
  • Blame-based culture
  • Highly political environment
  • Laid-back culture where no one cares about anything
  • Racism, bias, and discrimination
  • Complaints about reverse racism (a majority race complaining that the minority is racist)
  • Lack of diversity (ethnicity, race, age, gender, etc.)
  • Inability to express opinions without reprisal
  • Toxic culture
  • Competitive rather than collaborative environment
  • Erosion of culture with growth

Culture is at the core of all organizational issues. Culture breeds good practices as well as bad. Organizations should have zero tolerance for racism, bias, discrimination, reverse racism, and politics.

How Should Small Companies Respond

Small businesses may not have the luxury of a people department. Some aspects of people management, such as payroll and benefits, are usually outsourced. What cannot be outsourced is relationship management. Many managers and leaders are not practiced at identifying and holding crucial conversations. People are afraid and uncomfortable bringing up concerns with leadership. In some cultures, all issues end in gossip that management may never hear about.

Can your employees bring up a sensitive topic without fearing a reprisal? If not, there is a lot of work that needs to be done. 

It is very important to build good relationships with employees. It may help to read books such as Crucial Conversations by Patterson et al., and to promote a culture where critical issues can be discussed comfortably, without an employee fearing for their job or the company losing a highly skilled employee.

Organizations cannot stay in business if their only goal becomes appeasing employees in order to get great reviews. The goal should never be to get great reviews.

The goal should always be to become the most cherished place where people love to come and work. 

Everyone is in business to make money. You cannot make money with a group of unhappy and unmotivated people. It is better to shut down the business than to insist on squeezing work out of people. Unhappy employees are not good for business.

If employees are incompetent, do not pamper them. If the employees are competent, do not ignore them.

How Should Large Companies Respond

How should large companies respond to anonymous reviews? Everything said about small companies applies to large companies as well. Large companies often have the resources to bring in the leadership and training needed to help build relationships. They should continue to pay attention to what employees say online, identify the common complaints, and address them promptly. It is important to communicate and be transparent.

You can only sweep so much under the rug before it becomes obvious. Companies should have higher moral standards. 

Encourage people to have crucial conversations without the fear of reprisal. Reward employees for introducing positive change in the company. Help employees find a higher purpose in what they are doing.

Conclusion

Let’s acknowledge that we are all human beings, and learners for the rest of our lives. We need to treat each other as fellow human beings and nothing less. People often have to unlearn the bias and discrimination they witnessed growing up, and those same people go on to work in various companies at various levels. Leaders have to lead by example and build a positive culture – a culture where having the most important conversations is appreciated.



Digital Transformation: Stay Ahead Of The Curve

Digital transformation is real. If we do nothing, the result could be digital stagnation. Transformation is not a final goal but a continuous process of improvement. The word “digital” was first used in the 15th century and took on a new meaning with the invention of computers and the first electronic digital systems in the 1930s and 1940s. Digital transformation has been happening for the last 100 years.

So, what does the digital transformation in the 21st century really mean?

Digital transformation today stands for how we gather data using modern technologies to gain insight, in order to better serve and engage customers, employees, and businesses. We could extend that to serving humanity as a whole.

The undeniable fact is that we produce and gather more data than we can comprehend and analyze without the aid of machines. Is this good or bad? It depends on the quality of the data we collect and the benefit we derive from it in serving people better. Excessive data collection and consumption could become a problem, just like the plastic waste we produce. People are losing their lives trying to capture the very moments of their lives in the selfie culture.

Digital transformation consists of various components, among them IoT, 5G, AI, blockchain, NLP, ML, big data, analytics, and cloud-native applications. The rest of this article will focus primarily on the software application aspect of the transformation.

Technology Leaders

Amazon, Apple, Google, Microsoft, and Facebook are continuously inventing to solve their business problems. What we call digital transformation is something these big companies have been doing for a long time. Digital transformation is a process, not a final product. Most of us are only catching up with the pioneers in the field, and if we are too late to catch up, what we have could become the dreaded legacy that no one is proud of. As an example, Netflix OSS dropped its own tool, Hystrix, in favor of Resilience4j. Another example is Google, which stopped selling its Google Search Appliance (GSA) product in 2016. Many organizations that invested in GSA had to quickly move their businesses to Elastic, Solr, or other proprietary search engines.

We need to understand that these companies are not building frameworks for the sake of building frameworks. They are solving the business problems they face. If a solution becomes irrelevant, they move on.

Digital Transformation Failures

Many digital transformation projects fail, according to a 2016 Forbes article. The same sentiment held true in a 2018 Harvard Business Review article, and it holds even today among business leaders.

A major university on the East Coast with tens of thousands of users spent three years implementing a digital transformation solution for its students. By the time the product was finished, it was already running on a legacy system that was approaching end of support and needed an upgrade. The choices were either to spend another year or more upgrading the system or to build a new one. The transformation product they chose had already become legacy and was no longer the right solution. Three years is a long time in the era of digital transformation; in that timeframe, anything we build can become legacy.

It is inevitable that all organizations need to transform. It is a challenge at which many do not succeed.

Fortune magazine published a special investigative report in early 2019. According to that report, more than $36 billion was spent over a 10-year period to digitize health records, and it was a major failure. It is unfortunate that some of those errors resulted in the loss of lives.

Digital Transformation Success

There is a centuries-old proverb in Tamil: “Even when throwing (spending money) into a river, measure what you throw.” It applies well to the age of digital transformation. Many organizations throw their money into the river hoping that their misfortune with technology will change. It does not work that way. Intuit is one company that has seen great success with its digital transformation. A company that helps people keep their books does seem to know how to keep its own books right.

There are a few things that companies can do to have a successful digital transformation.

  • Usability. Anything we build needs to be intuitive. It should not require 4,000 mouse clicks a day for a physician to get through a shift in the ER. We can’t expect our users to be developers who know how to work around the system to get their job done.
  • Accountability. Teams often blame others for a failure, including the product they spent months choosing or the vendor they vetted through the RFP process. We need to own the responsibility.
  • Measure as we spend. It is important to look at the returns, whether short term or long term, and be able to justify the cost.
  • Small steps. We need to learn to walk before we can run. Identify a few areas that can be transformed and make them a model for the company.
  • Assess before adopting. Many of the technologies out there are, to some extent, fad or hype. Don’t be afraid to look under the hood. If you need help, choose a trustworthy partner. Organizations often choose the wrong product or platform and then try to find a partner who can help them transform.

Stay Ahead Of The Technology Curve

It is important that we always look at the problem that we are trying to solve and understand why we are doing what we are doing.

Many organizations may not have the means to invent their own transformation tools, and that is perfectly fine. Not everyone has to invent their own plane in order to fly. The key is to understand the purpose and limitations of the tools.

REST APIs became very popular in the early 2000s. Along with them came the JavaScript frameworks that made it easy to build applications. Teams started building apps that made dozens of requests to update various sections of a single page. This soon became a major issue because of the overhead that comes with every single request. Solutions such as WebSockets were proposed to reduce some of that overhead. Even though data can travel at the speed of light, we share bandwidth just as we share roads and bridges.

How did some organizations solve this problem? Some built view-optimized tables and caches, and even relied on flat-schema search engines such as Elastic to reduce the number of requests. Facebook solved it by building its own solution, which it called GraphQL.
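The core idea can be illustrated without the real library: the client describes the shape of the data it wants, and one server-side resolver assembles the response in a single round trip instead of several REST calls. A toy sketch over in-memory data (the schema and data here are invented for illustration):

```python
# In-memory "backend" standing in for what would be several REST resources.
USERS = {1: {"name": "Ada", "post_ids": [10, 11]}}
POSTS = {10: {"title": "Hello"}, 11: {"title": "GraphQL-ish"}}

def resolve_user(user_id, fields):
    """Return only the requested fields, expanding the nested 'posts' selection."""
    user = USERS[user_id]
    result = {}
    for field in fields:
        if field == "posts":
            # One server-side lookup instead of a client round trip per post.
            result["posts"] = [POSTS[pid] for pid in user["post_ids"]]
        else:
            result[field] = user[field]
    return result

# Roughly analogous to the query: { user(id: 1) { name posts { title } } }
print(resolve_user(1, ["name", "posts"]))
```

The client pays for one request; the fan-out to related records happens on the server, which is the chattiness problem GraphQL was built to address.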

Is GraphQL the final solution to all problems? Definitely not. Now you have to scale the GraphQL server much as you would scale your databases and applications to support all the traffic. We have just introduced one more layer to the problem.

What did AWS do with GraphQL? They saw an opportunity and turned it into AWS AppSync, which relieves end users of the pain and effort of maintaining another layer. Early adopters often face the challenge of doing everything themselves. Companies at this juncture should evaluate which path to choose for a successful digital transformation: should you spend years building your own, or find one that saves you years of effort?

The same applies to Kubernetes. Within a short time of its release, a whole crop of companies popped up to tell you how Kubernetes can be made easy and painless so that you can focus on solving your business problems.

Choose wisely: pick a platform or cloud solution that lets you focus on the business rather than on building massive technology teams. At some point an organization may find itself at the crossroads of becoming a pioneer. If that happens, let it be so; don’t be afraid to pick up the baton. Thirty years ago, some of the major players transforming the field right now were either too young or not yet born.

Conclusion

We are in a time when a team may rely on tools and contributions from Microsoft, Google, Amazon, Facebook, or Netflix to build a single solution. It is perfectly fine to work with multiple technologies; we are solving business needs. If we keep our focus on the goal, technology becomes an asset rather than a liability. Digital transformation is not a goal but a continuous process.



CMS: Hybrid, Decoupled, Headless, or Clueless

Content management systems (CMS) are as ubiquitous as food carts in major cities, and most of them find a niche to survive. We have many flavors of CMS, and depending on your needs, implementing a CMS solution can range from trivial to complex.

Traditional CMSs that added API capabilities are called hybrid. Newer ones that built their delivery mechanism separately from the backend system are called decoupled. Those with an API as the primary interaction mechanism are called headless. “Headless” stands for anything that doesn’t use a graphical user interface for user interaction, and the term has been around well before it became a CMS buzzword: software such as word processors, PDF generators, image processors, and browsers has long been used in headless mode by developers.

What Choices Do We Have?

Let’s start with the headless buzzword. A quick search for “headless CMS” brings up an interesting article in CMSWire covering a couple dozen players. The URL of the article starts with 13-headless-cms, but it goes on to list 24 of them, and even that is not an exhaustive list. There are many more CMSs and digital experience platforms that provide hybrid and decoupled platforms as well; a few of them are Episerver, Adobe Experience Cloud, Sitecore, Crafter CMS, Liferay, and Jahia. We also have many PHP-based solutions such as Drupal and others.

The other major CMS in terms of market share is WordPress: more sites are built on WordPress than on any other single platform, and organizations of all sizes use it somewhere or other. WordPress also has a REST API that can be leveraged to create single-page applications and deliver content to various devices, so it can be considered a hybrid CMS. Jetpack provides security, backup, and performance on top of a standard WordPress installation.
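For instance, a WordPress site exposes its posts at the /wp-json/wp/v2/posts endpoint. A minimal sketch of consuming that feed, using a canned response in place of a live HTTP call (the JSON shape below follows the v2 posts endpoint, trimmed to a couple of fields; the URLs are placeholders):

```python
import json

# Canned payload standing in for: GET https://example.com/wp-json/wp/v2/posts
raw = """[
  {"id": 1, "title": {"rendered": "Hello world!"}, "link": "https://example.com/hello-world"},
  {"id": 2, "title": {"rendered": "Second post"}, "link": "https://example.com/second-post"}
]"""

def post_titles(payload):
    """Extract the rendered titles from a WP REST API posts payload."""
    return [post["title"]["rendered"] for post in json.loads(payload)]

print(post_titles(raw))  # ['Hello world!', 'Second post']
```

A single-page application or a mobile client can consume exactly this feed, which is what makes the hybrid label fit.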

The concepts behind headless, hybrid, and decoupled are not new; CMSs have provided these capabilities for a long time. The terms have become buzzwords as we move into the fourth industrial revolution with connected devices.

How Do You Choose?

Many factors come into play when choosing a CMS. The basic requirement is a robust way to create and deliver content of various types. Beyond that, consider the following:

1. Purpose

What are you building? Answering this question will help you choose the appropriate CMS. In a startup economy, not everyone needs a complex system with a high price tag, and not all sites need all the capabilities. Based on your needs, you can find one that fits, with price tags from a few dollars a month to thousands of dollars a month. More and more vendors support businesses of all sizes with their SaaS offerings and pricing models.

2. Ecosystem

Developers who are already in a certain ecosystem, such as Firebase, may want to go with something like Flamelink that is already built and billed for that environment. Those using Heroku may want to start with something like Butter CMS or Elegant CMS, as these vendors target that niche. A more robust enterprise implementation may call for enterprise-scale systems that offer both self-hosted and cloud (SaaS) solutions.

3. Content Delivery / Caching

Speed and distribution are basic requirements for all CMS solutions. Deployment and delivery also depend on what you are building and where your audiences are located. Are you building a site or app for a museum, a sports franchise, a news outlet, or an intranet for a workforce distributed around the world? It is easy to distribute publicly accessible static assets to various channels and regions. Assets that require permissioning need advanced capabilities and may benefit from a distributed, decoupled CMS.

4. Security and Audience Targeting

When it comes to users and roles, many of these systems focus on content creators rather than content consumers. If you are building a site for the general public, the only access control you may need is for authoring, editing, and publishing content. Once content is published, it is public and often has no restrictions on who can view it.

If you have highly permissioned, sensitive content with a targeted audience, a better choice is a digital experience platform that offers a hybrid/decoupled model along with all the enterprise features.

5. Robust Search

Robust search capabilities backed by search engines such as Elastic or Solr are a key requirement for most major implementations. The search needs to be powered by strong support for taxonomy and permission models.
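As a sketch of what search backed by a permission model looks like in practice, here is an Elasticsearch-style query body that combines full-text relevance with a non-scoring filter on the roles allowed to see each document. The field names (`content`, `allowed_roles`) are hypothetical; the overall shape follows Elasticsearch’s bool query DSL:

```python
def permissioned_search(text, user_roles):
    """Build a query body: full-text match plus a visibility filter by role."""
    return {
        "query": {
            "bool": {
                "must": [{"match": {"content": text}}],
                # 'filter' clauses do not affect relevance scoring;
                # they only restrict which documents are visible.
                "filter": [{"terms": {"allowed_roles": user_roles}}],
            }
        }
    }

body = permissioned_search("quarterly report", ["finance", "managers"])
print(body["query"]["bool"]["filter"])
```

Indexing each document with its taxonomy terms and allowed roles, then filtering at query time, keeps restricted content out of results without distorting ranking.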

6. Whom Are They Competing With?

Another way to identify the capabilities of a CMS is to look at whom it competes against. Does it claim to be better than WordPress, Drupal, or the enterprise vendors? This can give you an idea of where it stands, especially when you are not familiar with the product you are evaluating.

7. Cost

Cost is always a major factor when choosing a product. As an example, Butter CMS bases its pricing on the number of blog posts, which may not be a great option if you are building a public blogging platform. Some platforms may require thousands of dollars per month, including subscription, licensing, and development costs. The amount an organization spends often depends on the size of its budget; it does not always correlate with need unless the organization employs lean methodologies.

8. Find Out Who Uses These CMSs

This one is a little tricky. Many vendors may list the same customer as their client. In large organizations, different departments tend to have their own procurement processes, so a large company may have multiple solutions across various departments and locations. It is also possible that a listed customer was once a client of the vendor but is no longer.

9. Support

If you are planning a major implementation, support is a key factor. The sales team often disappears once you make the purchase. How do you know whether a vendor has good support? It can be a challenge to get unbiased opinions from references. It may help to talk to other customers at conferences or roadshows, or by reaching out to them directly, to understand their experience. The happiness of a vendor’s employees is also an indicator of company culture; don’t hesitate to look at Glassdoor reviews.

10. A Trusted Partner

Any major implementation needs a trusted partner. A partner does not have to be a major firm; it could be one or two people who have the expertise to sort through the pile. The key is being able to look beyond the buzz and hype.

Conclusion

Identify your need. Identify your budget. Identify a few vendors using some or all of the capabilities listed above. If the product is open source, download it and evaluate the capabilities for yourself; if it is not, ask for a trial. Build a quick proof of concept and choose the one that meets your criteria. If your team is committed and happy with the choice, that is the sign of a great beginning.

Published: July 25, 2019


  • -

Serverless Computing: A Universal Fit

Category: Development, Serverless

Many organizations are pushing for serverless architecture in every aspect of application development. Serverless is not just functions as a service; it covers everything from request handling to data, notifications, authentication, authorization, and more. More importantly, it has become a solution for companies of all sizes.

What Is Serverless?

Serverless applications do run on servers; it is just that the users are not the ones who provision and manage them. Computing resources are spun up to do a job for a very short time and are then released for others to use. Cloud providers have built a vast amount of capacity and continue to do so. The idea is that not every customer will need maximum capacity at the same time. If that ever happened, there could be a service outage, which is something serverless providers constantly factor into their capacity planning.

Serverless also differs from other offerings in the way cost is calculated. In a serverless model you pay per usage at a much more granular level. Usage is typically driven by an event and the amount of resources consumed while processing that event. It is common to run a function for less than a second and to pay for every 100 milliseconds of usage, as is the case with AWS Lambda pricing.
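To make the granularity concrete, here is a rough cost sketch in Python. The rates below are illustrative placeholders, not actual AWS prices; the rounding up to 100 ms reflects how Lambda billed at the time of writing.

```python
import math

# Illustrative rates only -- not current AWS prices.
PRICE_PER_GB_SECOND = 0.0000166667  # compute price per GB-second (assumed)
PRICE_PER_REQUEST = 0.0000002       # per-invocation price (assumed)

def lambda_cost(invocations, duration_ms, memory_mb):
    """Estimate monthly cost with duration rounded up to the next 100 ms,
    as AWS Lambda billed when this article was written."""
    billed_ms = math.ceil(duration_ms / 100) * 100
    gb_seconds = invocations * (billed_ms / 1000) * (memory_mb / 1024)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# One million 120 ms invocations at 128 MB are billed as 200 ms each.
print(round(lambda_cost(1_000_000, 120, 128), 2))
```

The point of the exercise is that cost scales with actual work done, down to tenths of a second, rather than with how many servers sit provisioned.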

Scale Up and Down Without Any Commitment

Serverless providers guarantee scalability for most of their service offerings. This is the most attractive part for startups, and for startups within an enterprise, because it allows teams to innovate faster without a major investment in infrastructure.

Serverless As First Class Offerings

AWS offers Spot Instances, which let you use unallocated capacity at a much reduced cost with the understanding that it can be interrupted at any time. Spot Instances are much more complicated to use than serverless offerings. Serverless services come with the promise that they will scale as needed without interruption, and that promise makes serverless a reliable first-class service for building critical applications.

Serverless Containers and Kubernetes

Azure and AWS both offer container services in a serverless mode. The idea is that Kubernetes is not for everyone to deploy and manage; businesses should focus on deploying and scaling applications without having to manage the underlying machinery that makes scaling possible. Azure Kubernetes Service describes itself as serverless Kubernetes. AWS offers Amazon Elastic Container Service (ECS) with two modes of operation: for those who want more control, it can run in EC2 mode, while for those who do not need granular control, AWS Fargate provides a serverless mode for containers.

Authentication In Serverless 

Authentication in serverless is billed a little differently, in terms of units of charge, compared to traditional Identity as a Service (IDaaS). Serverless services such as Amazon Cognito charge based on monthly active users (MAUs), not on requests. Other identity providers such as Okta and Auth0 have similar models; the difference is that they offer tiers such as up to 1,000 MAUs, 2,500 MAUs, and so on.
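The difference between the two billing shapes can be sketched with a pair of toy pricing functions. All the rates and tier boundaries below are hypothetical; check each vendor's price list for real numbers.

```python
def cost_per_mau(active_users, rate=0.0055):
    """Serverless-style identity billing: pay per monthly active user."""
    return active_users * rate

def cost_tiered(active_users, tiers=((1000, 0.0), (2500, 25.0), (10000, 100.0))):
    """Tier-style billing: a flat price for whichever tier you fall into."""
    for limit, price in tiers:
        if active_users <= limit:
            return price
    raise ValueError("user count exceeds the largest tier")

# 1,200 MAUs: per-MAU billing scales smoothly, tiered billing jumps to the next tier.
print(cost_per_mau(1200), cost_tiered(1200))
```

Per-MAU billing tracks usage continuously, while tiered billing introduces step changes as you cross each boundary, which matters when your user count sits near a tier edge.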

Upgradable Application Design

Various upgrade nightmares stem primarily from tight coupling between services and their underlying dependencies. If applications communicate over HTTPS and treat other services as black boxes, many upgrade nightmare scenarios go away, although such a design does come with an upfront effort. The traditional model of REST APIs is also evolving. If you are making many single-purpose API calls from the browser to the server, consider building applications with GraphQL. GraphQL acts as a layer between APIs and data sources, reducing the number of calls needed to get the desired data. GraphQL has also gained attention in the AWS, Azure, and GCP communities in the serverless model: AWS provides AWS AppSync as its GraphQL engine, while there are third-party plugins and support for GraphQL on Azure.
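As a sketch of the round-trip savings, compare a browser making two REST calls with a single GraphQL-style query the server resolves in one pass. The data stores and field names here are invented for illustration; a real deployment would sit behind a GraphQL engine such as AWS AppSync.

```python
# Mock data sources standing in for two backend services.
USERS = {"u1": {"name": "Ada"}}
POSTS = {"u1": [{"title": "Hello"}, {"title": "GraphQL"}]}

def rest_style(user_id):
    """Two separate round trips from the browser."""
    user = USERS[user_id]   # GET /users/u1
    posts = POSTS[user_id]  # GET /users/u1/posts
    return user, posts

def graphql_style(user_id):
    """One round trip; the GraphQL layer fans out to the data sources
    server-side and returns exactly the shape the client asked for:
    query { user(id: "u1") { name posts { title } } }"""
    return {"user": {"name": USERS[user_id]["name"],
                     "posts": POSTS[user_id]}}

print(graphql_style("u1"))
```

The client-side savings grow with the number of data sources: three or four REST calls collapse into one query, and the client only downloads the fields it requested.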

Serverless Edge Computing

Edge computing is the next major application delivery model to go serverless. Every major cloud service provider offers edge computing capabilities, primarily geared towards IoT devices. Edge computing is becoming critical, especially for businesses with a global user base. It can be considered an extension of the content delivery networks we are used to; the main difference is that your serverless code runs close to the users.

Go Innovate

It does not matter which cloud provider you use; the key to success is the ability to innovate fast. Application delivery is not like building a plane or a space program, where fast usually means a few years. In application development, teams build and deploy applications in a matter of weeks. If you are new to serverless, start with functions and offload some of your workload, then slowly expand to other services.

Serverless offerings are geared towards businesses that are startups, or that would like to act like startups and innovate faster. It is no surprise that big companies are already spending millions of dollars on serverless services. The actual choice of what to use requires more than just the cost factor: every business and its executives need to evaluate the right model based on cost, talent pool, and, more importantly, organizational culture. Make your team the most valuable part of any solution. The industry is changing rapidly, but the core principles and the need to solve the problems we face remain the same.

Published: July 8, 2019


  • -

Heroku: An Awesome PaaS Platform

Category: Development, Microservices

The days when companies measured productivity by hours worked and lines of code written are long gone. In the fourth industrial revolution, businesses do not have the luxury of spending years developing products. The audience and the market move so fast that companies have to continuously innovate and deliver new products and services to stay in business.

Cloud service providers who offer various "as a Service" products also have to continuously evolve. We are in a cloud-native, microservices, and serverless era. One such "as a Service" product is Heroku, a Platform as a Service (PaaS) founded in 2007 and acquired by Salesforce in 2010. There are dozens of alternatives among the various "as a Service" offerings; some of them require new expertise while others leverage existing talent. We will keep our focus on Heroku for this article. Heroku is a product where a developer who knows how to commit code can deploy a scalable application with a single commit.

Many of us have heard about two-pizza teams and keeping the number of team members responsible for a service to a minimum. With Heroku, you can start with just a couple of engineers and add more as needed. A lot of focus over the past few years has been on building a minimum viable product (MVP), an idea made popular by the lean movement. The period we are in now moves even faster. Why just build an MVP when you can build a great application with a team focused on the business need, spending less time on platform, infrastructure, and even DevOps?

Should You Orchestrate Containers?

Businesses no longer have to buy servers and install operating systems. If you are still doing that, it is like living in the era of CRT monitors and manual typewriters: provisioning a server with the OS and all the libraries is just a click and a few seconds away. In the same way, running scalable applications should not mean every business needs to learn how to deploy, orchestrate, and manage containers on every single cloud service provider.

Heroku DevOps Support

With Heroku, a single command, git push heroku master, can deploy your Java, Ruby, Python, Go, Node.js, and many other applications to scalable cloud infrastructure.

Heroku provides DevOps support through its own toolset, including buildpacks, pipelines, review apps, and Heroku CI.

One of Heroku's core contributions to the community is buildpacks. Buildpacks make it easy to deploy applications: the auto-detection, build, and deployment they provide mean you could literally deploy your code to a staging environment without even having all the SDKs installed locally. The point here is to emphasize the simplicity of buildpacks, not to recommend developing without the right tools. If you have to make a small change, you could even edit and commit directly from GitHub and trigger a deployment via pipelines.
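As an example of how little the buildpack needs, a hypothetical Python web app can be deployed with nothing more than a one-line Procfile telling Heroku how to start the web process (the app and module names here are placeholders):

```
web: gunicorn app:app
```

The presence of a requirements.txt file is what triggers the Python buildpack's auto-detection; after that, a git push to Heroku builds and releases the app.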

Heroku teamed up with Pivotal in 2018 to come up with Cloud Native buildpacks. More on this can be found at https://buildpacks.io/.

Productivity means developers focus on what matters most to the business need. A business that does not adopt these best practices simply gets less benefit, and more frustration, out of its team.

Heroku is available in multiple regions, and via pipelines you can easily deploy an app to more than one region.

Heroku For Compliance

Businesses that require HIPAA, PCI, or other compliance can make use of Heroku Enterprise. Heroku provides Private Spaces that are accessible only via VPN and within a VPC, and it provides PrivateLink to AWS resources. Heroku itself runs on AWS servers and is available in many AWS regions. Heroku may or may not continue to run on top of AWS, but its commitment to supporting VPC/VPN connections to other cloud providers' resources and to on-premise systems seems strong.

Heroku Ecosystem

Since Heroku is owned by Salesforce, its obvious focus is on providing integration with the Salesforce (CRM) platform and associated products. Salesforce provides Heroku Connect to synchronize data between Salesforce and a Postgres database in an enterprise deployment.

Microservices in Heroku

You can run applications in a microservices architecture within Heroku Private Spaces or in the much more affordable public spaces; it all depends on the type of applications you are running and your compliance needs. If you are building applications in Java and are interested in Spring Boot and Spring Cloud Services, you could look into JHipster, which provides tools to build and deploy applications to various cloud providers including Heroku. Spring Cloud relies heavily on the Netflix OSS tools, and Netflix may discontinue a project or put it in maintenance mode. Spring Cloud has its own release cycles, and sticking to them will let you focus on your applications instead of figuring out which dependencies are compatible.

Where Do You Start

The focus of this article is to introduce readers to Heroku. You can start by building, deploying, and running applications for free, then choose the computing engine that suits your needs, starting at $7 a month per container (dyno).

The best place to start is the Heroku Dev Center. Choose your language, go through a quick tutorial, deploy your application, and feel the power. It is a great experience to run and scale your application without having to worry about SSL, load balancing, the OS, SDKs, networking, orchestration, or setting up monitoring. Heroku provides a web dashboard as well as a powerful CLI tool. You can also find many add-ons in the Heroku Elements Marketplace to enhance your application, a lot of them with a free tier as well.

Do Not Stop There

A blog post on Dev.to comparing AWS and Heroku is a quick read on the added value Heroku provides compared to AWS. You can benefit from running microservice apps on Heroku alongside other services; you do not have to restrict yourself to Heroku Elements. You can combine the power of Heroku with AWS and other cloud provider offerings. As an example, Okta's developer plan provides a free tier with up to 1,000 monthly active users to get you started with authentication. In one application we developed, we leveraged an AWS Lambda function to recognize and parse documents (OCR using Tesseract) and another function to zip documents for download. The idea is that you can build an awesome application that costs less than $1/month per user, with the ability to scale as needed. There are competing products such as Pivotal Cloud Foundry, but Heroku is a developer-friendly place for small and medium-sized businesses to get started.
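As an illustration of offloading the zip work, here is a minimal sketch of a Lambda-style handler that builds an archive in memory. The event shape and field names are invented; a real function would fetch the documents from S3 and would likely stream large archives rather than buffer them.

```python
import base64
import io
import zipfile

def handler(event, context=None):
    """Sketch of a download-zipping Lambda. Documents arrive in the event
    here only for illustration; production code would pull them from S3."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for doc in event["documents"]:
            zf.writestr(doc["name"], doc["body"])
    # Return the archive base64-encoded, as API Gateway expects for
    # binary response bodies.
    return {"statusCode": 200,
            "isBase64Encoded": True,
            "body": base64.b64encode(buf.getvalue()).decode("ascii")}
```

Because the zipping runs in its own function, a burst of export requests scales the function, not your web dynos.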

Published: June 27, 2019


  • -

Liferay: In A Serverless and Microservices Era

In the era of microservices and serverless architecture, it is essential to evaluate whether you need to build or buy software. A decade ago, there was still a lot of pushback against virtual machines (VMs) for production servers, as they were considered a cause of performance issues. One easy solution at that time was to scale vertically by adding more vCPUs and memory. That is not the case anymore: organizations have not only adopted virtual machines, they are also moving towards containers and serverless. Applications no longer have to be built and deployed as one massive piece of software.

Modular Monolith

As the tech giants were sharpening their skills in microservices, containers, and serverless, most businesses were still struggling with modular monolithic applications. They are called modular monoliths because the applications are written as modules but deployed as a monolith.

How can you tell if you are running a modular monolith? Look for a few signs. First, upgrading the application takes all of its services down during the process. Second, scaling one part of the application means scaling the whole application server. A third indicator is that the application dictates which programming languages and constructs you must use.

Trend

As of now, many businesses have adopted microservices and serverless architecture or are considering their options. These architectures have their own challenges in orchestration, monitoring, and management; it is a different set of problems than those of traditional modular monolithic applications. Many tools make it easier to deal with microservices and serverless architecture, and several of them support deploying changes to any environment with a single CLI command. In the serverless area, Amazon has its own AWS SAM CLI, while Serverless, Inc. has the Serverless Framework.

Cloud technologies adoption is becoming common in sectors such as government, healthcare and payment card industries.

What Are We Solving? Is There A Business Driver?

The goal is not to adopt technology for technology's sake. The goal should be to solve the business need in a cost-effective and timely manner while meeting ever-changing requirements. We need agility and speed while simultaneously running a highly available application. It is important to keep our eyes on the goal: if a modular monolith solves your need really well, you may want to keep it. But let's not allow our love for legacy applications to keep us from innovating in business and technology.

How Do We Adopt?

Adoption takes time, and it does not have to be all or nothing. The best way to start is to explore the options for some parts of your application. If you have to parse documents, scan documents, or analyze text or video, it may be better to hand those off to serverless offerings than to deploy and manage those services yourself. Traditionally, applications run command-line tools inside the same application server to perform a whole range of tasks. The issue with this approach is that you are forced to scale the whole application layer when you only need to scale those other processes, which compete for the same resources. A piecemeal approach is a good starting point.

Is Liferay a Modular Monolith?

The prelude above matters for the following discussion, since some readers may not have heard of Liferay. Liferay is a Java-based digital experience product primarily used to build intranets, customer portals, dashboards, and public-facing sites. As a product, Liferay has been around for more than 15 years, and I have spent more than a decade with it since I was introduced to it in early 2007.

Liferay has evolved over the past decade, yet it may fundamentally look the same when it comes to the deployment architecture. Let’s quickly go over a few points:

  • Search. A decade ago Liferay used Lucene by default and supported Solr and other engines for search and indexing of documents. Now it uses Elasticsearch by default while still providing support for Solr.
  • Database. Liferay runs on a relational database, but you can develop applications (portlets) that use their own data sources. This has remained the same since the product's founding, though Liferay did remove some support for application-level database sharding.
  • Deployment Architecture. In a high-availability environment, Liferay instances are deployed as clusters, with all instances sharing the same database and data storage. Plugins can share the same database or connect to external services and databases. The major change for enterprise customers is that Liferay recently started supporting an elastic licensing model where you can increase and decrease the number of instances while paying only for the additional time; in earlier versions, licenses were requested through a ticketing system. This changed with the introduction and evolution of the Liferay Connected Services plugin.
  • Vertical Scaling. It is very common for a Liferay installation to demand a powerful application server configuration, as a lot of heavy lifting happens at this layer. For example, all document parsing and conversion happens on the application server, and only the indexed document is pushed to the search engine. Likewise, all stored content images are rendered and cached at the application server layer, though it is technically possible to externalize this cache.
  • Plugin Portlet Development. Portlets are generally written in Java or in Java frameworks such as Spring MVC, JSF, etc. In the latest version of Liferay, you can write portlets using JavaScript frameworks such as React and Angular. Liferay supports bundling the JS application and deploying it to the server, or running it as a standalone JS application using Liferay's remote services. It may be very convenient to bundle all the JavaScript and deploy it to the application server, but deploying in that manner means your application server is also the web server, serving a lot of JavaScript and CSS.
  • Services. Liferay has long supported exposing its remote services via SOAP and JSON APIs.

Various aspects of Liferay have evolved over the past decade, yet at a high level it may fundamentally look the same. When it comes to scaling and upgrades, Liferay deployments resemble a modular monolithic application. Is there a way to address some of these concerns?

Microsites As An Option

One way Liferay solves some scalability and availability needs is via a microsite architecture. As an example, consider how Liferay addresses its own needs. Liferay.com probably started as a single application server that later evolved into a cluster, with the various needs for separation of content and access provided through communities and memberships within that single cluster. One disadvantage of this approach is that you have to scale the whole system, vertically and horizontally, to support a growing user base. Another major challenge with a single application cluster is that an upgrade affects all types of users, sites, and communities.

One way to solve the scalability and upgradability challenge is to run separate clusters for various microsites such as help.liferay.com, web.liferay.com, partner.liferay.com, community.liferay.com, dev.liferay.com, etc. They are all tied together via SSO but exist independently of each other. Typically, if your departments are big, they may want to manage and upgrade their own microsites, which would result in multiple versions of Liferay running within the same organization. This can create the very silos organizations want to prevent; as we all know, an organization's culture is reflected in the way its teams talk to each other.

Build Or Buy

If you have the capability to develop greenfield applications, you should definitely look into options that are not constrained by a platform. Do you need a blog, or are you trying to build a site like https://medium.com? Is your need easily fulfilled by an existing platform, or do you have to customize the platform heavily?

It all depends on your business needs. If you are going to spend a significant amount of time and money customizing a product, it may be worth looking into building a greenfield application instead. You could also take a hybrid approach where your application leverages functions as a service and other serverless features as needed. If your application regularly needs to export a lot of files, it would be wise to run that zip process in an AWS Lambda function, or as a separate asynchronous microservice, rather than inside the same application server. Freeing up your application server's resources lets it better serve other requests.

What’s Next?

I wish I could cover everything in a single article, but that would become a modular monolithic article. I hope to cover more on this topic in future articles; if something interests you specifically, please comment.

Published: May 22, 2019


  • -

OSGi Adoption and Liferay

Category: Development, Liferay, OSGi

Where Do We Start?

The goal of this article is to give business executives a perspective on how relevant the adoption of OSGi in Liferay is. Is it really worth the effort to learn OSGi? There are many ways to develop an application, and numerous frameworks in languages such as Java, JavaScript, Ruby, PHP, and Python. If your firm does not deal with Java technologies, OSGi is not for you. If you have a Liferay implementation or are considering one, continue reading.

A Little Bit of History

OSGi has been around since 1999, and Liferay has invested more than five years in the technology. The last two years were probably the most intense of all, as Liferay migrated most of its core portlets and services. Until recently, most of the Liferay developer community could easily ignore OSGi and continue to develop plugins the old way. The recommended approach going forward is the OSGi bundle, though legacy deployment may still be supported.

Why OSGi in Liferay?

This is what Ray Augé had to say over the years. Ray has been leading this effort and is the key player behind this implementation.

“Liferay is a large, complex application which implements its own proprietary plugin mechanisms. While Liferay has managed well enough dealing with those characteristics over its history, it’s reached a point on several fronts where these are becoming a burden which seem to bleed into every aspect of Liferay: development, support, sales, marketing, etc.” – Ray Augé October 11, 2012

“Liferay is a complex beast with an intricate network of features, so many features in fact that they occasionally have such indistinct lines such as finding the where one feature ends and another begins can be difficult…The number of benefits is almost too great to list. However, one of the greatest advantages can’t be discussed enough: Modularity.” – Ray Augé Feb 4, 2013

The primary reason Liferay adopted OSGi is to make Liferay easier to manage as a platform, which in turn makes things easier for core Liferay developers. The key benefit is the modularity of the OSGi platform, which allows the end user to add, remove, and enhance services dynamically. The hardware industry has long followed a modular approach that lets us add and remove components easily, paired with a software piece that recognizes those dynamic components. Not all software is developed that way, and such modularity should not be the responsibility of each piece of software but of the platform. A capable platform ensures that we do not have to reinvent the wheel in every piece of code. This is one of the promises of OSGi: if you can declare what your piece of code provides, and someone else can declare what their piece of code needs, the platform can match the two up while the system is still running. The OSGi platform offers various other benefits that you can read up on.

How Relevant is OSGi in Current Architecture?

If your business uses Liferay or is looking to use it, it is important that the team invest the time to learn the basic concepts of OSGi. OSGi has a very dedicated group of members who have given all they have to keep it relevant. It is very likely that Liferay will continue to use OSGi for at least another five years (an estimate based on how long the adoption took), so time spent learning OSGi while using Liferay is not a waste.

OSGi provides the concept of µservices within a single JVM, and this is the feature Liferay primarily relies on. To stay relevant with modern cloud architecture and distributed services, there are OSGi initiatives such as Amdatu that embrace cloud computing.

You could develop a full-fledged application entirely in OSGi, just as you would with Spring Boot and Angular or any other JavaScript, PHP, Ruby, or Python framework. To put it another way: instead of asking "How relevant is OSGi?", we can say that OSGi is trying to stay relevant by making use of all the new technologies. One thing the community may lack is funding and hype comparable to some of the other platforms.

Is OSGi the Only Way to Build Modular Applications?

OSGi is probably the best way to build a modular, dynamic application in Java within a single Java Virtual Machine (JVM). The key phrase here is single JVM. The way software architecture is evolving, applications are built in a more tolerant way, so that you can remove a server and add it back in a matter of seconds to a minute or two.

The concept of swapping a class or implementation within a JVM is not relevant if you are already a shop that knows how to build and deploy your application to an elastic cloud; at that level, you are elastically scaling virtual machines in a more tolerant way.

Spring Boot, along with PaaS providers such as Pivotal Cloud Foundry and Heroku, is an alternative for Java developers. OSGi enRoute is trying to provide similar capabilities, letting you bundle your app as a jar file and run it anywhere. With the pipelines offered by some PaaS providers, it is now as simple as committing the code; the rest is taken care of for you.

If you are familiar with some of the JavaScript frameworks, they do a whole lot without having to worry about class loading issues. In fact, Liferay itself is working on a similar PaaS called WeDeploy, currently in alpha. Liferay's interest in providing such a platform clearly indicates an effort to stay relevant and diversify its risks.

It all depends on what you are looking for. If you are a platform or tool builder, it makes a lot of sense to use a framework like OSGi. If you are already running tolerant applications in the cloud, the modularity offered by OSGi may not concern you at all.

A look at OSGi Adoption

Eclipse IDE is one of the most successful adopters of OSGi that a Java developer regularly comes into contact with. Spring tried to support OSGi but later dropped Spring DM due to its complexity. GlassFish adopted OSGi, but the project was later discontinued by Oracle. Liferay has taken the major step of adoption and successfully launched a major version on it. There is still a lot of work to do within Liferay, but Liferay provides good support for those developing against it. OSGi has moved further along with OSGi enRoute and various cloud computing offerings. Liferay's primary focus in OSGi adoption is the capabilities within a single JVM; in my opinion, by doing so Liferay has committed itself and become a major player in OSGi for web applications. Continued success and user adoption within the Liferay community could very well provide the oxygen OSGi needs in the web application space.

Liferay has done the toughest part of OSGi adoption in its platform. For end users, Liferay provides various utilities to interact easily with OSGi service trackers and the like. Understanding the basics of OSGi is enough to develop plugins for the Liferay platform, and developers can dive deeper into OSGi as needed while working with Liferay. This is similar to how developers using Liferay interact with its services without having to master Spring or Hibernate.

Bndtools and various other OSGi frameworks such as Apache Felix and Amdatu make things easier for developers. There is still a lot of activity, supported primarily by OSGi Alliance members, which keeps the community strong even after 17 years.

Conclusion

There are many ways to develop rich applications. If you are a shop that has invested in Liferay, getting up to speed with OSGi will make your job easier. If you are not invested in Liferay or Java, you could live without ever knowing what OSGi is. The one thing OSGi lacks is hype and support from the broader Java community. As Liferay tries to be more than a portal, your business needs to think beyond any one programming language or platform and evaluate what is best for you. Technology changes fast; instead of adopting it for its own sake, adopt it to solve your needs and your customers' needs.


  • -

Marketing, Did You Get Sold?

Category: Marketing and Sales

I have been trying to understand the difference between marketing and selling, especially how it plays out in the IT industry. Surely we can all relate to buying things we neither need nor want at various points in our lives.

The American Marketing Association (AMA) defines marketing as:
“Marketing is the activity, set of institutions, and processes for creating, communicating, delivering, and exchanging offerings that have value for customers, clients, partners, and society at large.” – American Marketing Association

This is how a former Harvard professor defined selling:
“Selling concerns itself with the tricks and techniques of getting people to exchange their cash for your product. It is not concerned with the values that the exchange is all about.” – Theodore Levitt, Former Professor at Harvard Business School

Quite often we can relate the success or failure of a project to the difference between marketing and selling. We will review some of the pitfalls so that you can spot issues early on. At the end of the day, the buyer needs to make sure that they are getting value and satisfaction for their money.

Pitfalls

1. Just Launched a Site With Millions Of Users

If you hear someone say that they just launched a site that will serve millions of users, you might imagine it is the next Facebook. It may very well be. If it is, and you are looking to build a similar system, you are in the right hands. In certain cases it may not be. What your million users need may be very different from what the system that was built delivers.

Say you need to build a site where millions of users log in, search, and buy products. The needs of this site are very different from those of a health insurance or auto insurance website. Both sites may well have the same number of users, but the activity level and needs are very different. With health insurance and auto insurance websites, once you set up auto-pay for the next six months or a year, you may hardly log in unless there is a claim or you want to add another dependent. The insurance industry may also need to process and generate documents, either on demand or offline, which may be more taxing than generating a simple invoice.

Most IT executives are experienced enough to ask the right questions about concurrent users, active users, daily logins, hourly logins, peaks, number of transactions, and so on. But some may go with their impulse and buy a solution, only to realize later that it is not what they wanted. By then it would be a few million dollars too late.

2. Delivered Solutions To A Major Company

We lend our ears as soon as we hear the names of major companies. If one of the Fortune 500 companies adopted this technology or solution, or chose this vendor, then surely they are the right ones for me. It would be very simple if that were the case. A vendor may very well be the preferred services provider for a company and deliver all of its IT needs. That is great news. In some instances, though, a vendor might have delivered a very small piece of a solution to a major company and then used that company's name to win work they cannot deliver. Asking the right questions is key. The questions that are relevant to your project may be different from the ones that are relevant to others. Instead of chasing the big names, look to see whether the vendor has what it takes to deliver what you need.

3. Look At Our Awesome Testimonials

This is an awesome testimonial, and any city that needs a new billing system could consider looking into basis2. What it does not tell you is that even basis2, the fifth attempt, was eventually deemed a failure after a year. After a total of $49 million and five attempts, the basis2 system sent some residential customers a $331K utility bill, compared to $97 the previous month. The company that sold the solution was able to capitalize on a failed project. For those who are interested, you can read the reports from the city at the links below: City of Philadelphia Chooses Basis2, City of Philadelphia Audit Report A Year Later.

Things to Consider

1. If It Works, Keep It

How many times have you heard something like, "If it is working well for you, keep it"? Typically, what you hear from a salesman is that you need everything they sell. Quite often the commission is paid not at the end of the delivery but much sooner. Many will remember the recent Wells Fargo fallout and the pressure of sales targets.

2. Never Shop Hungry

We all either know this or acknowledge the saying, “Never go shopping when you are hungry.” If you are in a desperate situation, it is likely that you will buy anything that you believe will satisfy the immediate need. A more proactive solution is to continuously assess what you have and plan for the future.

3. Don’t Feel Rushed

If you hear someone say, "If you don't decide now, it won't be available later," it means the seller needs the sale more than the buyer needs the product. If you like to be in control of what you are buying, never let external factors take that control away. The more I have taken the opportunity to think through what I am getting without rushing, the more satisfied I have been. The key here is the satisfaction, even if you end up with something else later on. Let us not confuse due diligence with procrastination.

4. Trusted Partner

The difference between a trusted partner and a con man is that of an anchor versus a fishing rod. One keeps you from drifting away; the other attracts you with bait. If you find a trusted partner, continue to work with them.

Conclusion

There is a lot that can be written on this topic. The success or failure of a project starts with someone getting sold. If you don't have a plan for your money, there are people who definitely have plans for it. We learn a lot through failed attempts. It is much smarter to learn from others' failed attempts than to insist on failing yourself.


  • -

Application Performance, A Never Ending Journey

Category : Development

Introduction

Application performance has always been a concern, even as hardware and memory have become cheaper over the past decades. Does it still matter when you can containerize pretty much everything and even run a container per user while they are online? The answer is yes. Dynatrace, New Relic, and other application performance monitoring providers have those markets covered as well. So, where do we start?

Tools

As the saying goes, a good craftsman never blames his tools. You could look at this in a couple of different ways. You make it work with what you have, leveraging your expertise. You choose the right tool for the work when you have the option. Once the tool is chosen, complaining about it without a proper course of action, or a change of course, only reveals poor craftsmanship.

If all that you have is a rock, you could still make a spear out of it. It does not mean you ignore all the inventions and discoveries that came afterwards and get stuck between a rock and a hard place.

Design

No one has unlimited access to resources. Even if you did, that would not guarantee performance. As Amdahl's law explains, the portion of the work that cannot be parallelized limits the speedup you can achieve by merely scaling out. It is like trying to use a flour sieve to fill a container with water. Design your application for the task it is intended for.
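Amdahl's law can be made concrete with a few lines of arithmetic. The sketch below computes the classic formula, speedup = 1 / ((1 - p) + p / n), where p is the fraction of work that can be parallelized and n is the number of workers; the class and method names are just for this example.

```java
// Amdahl's law: the speedup from spreading a fraction p of the work
// across n workers. The serial fraction (1 - p) caps the gain no
// matter how many workers you add.
public class Amdahl {

    static double speedup(double parallelFraction, int workers) {
        return 1.0 / ((1.0 - parallelFraction) + parallelFraction / workers);
    }

    public static void main(String[] args) {
        // With a 10% serial portion, even 1000 workers cannot push
        // the overall speedup past 10x.
        System.out.printf("p=0.9, n=10   -> %.2fx%n", speedup(0.9, 10));
        System.out.printf("p=0.9, n=1000 -> %.2fx%n", speedup(0.9, 1000));
    }
}
```

This is why identifying and redesigning the serial bottleneck usually beats buying more hardware.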

As an example, if your application needs to process a lot of documents, parse them and index them, it may be better to scale that aspect of it without affecting the performance of the system. If you are buying a platform, see if the platform can easily support the scaling as shown below. If not, you need to plan for it with the tool that you have. Search engines themselves are capable of parsing, indexing and storing the documents.

Search Optimized Architecture

Quite often a system may support an external search engine but may not support externalization of document parsing. Depending on your use case, this may or may not be an issue.
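One way to scale the parse-and-index stage independently, as discussed above, is to push documents through a worker pool whose size you tune separately from the rest of the application (or move to separate processes entirely). The sketch below is a simplified illustration: `parseAndIndex` is a hypothetical stand-in for a real parser plus a call to a search engine's indexing API.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a parse/index stage sized independently of the main
// application, so heavy document processing does not starve it.
public class IndexingPipeline {

    private final ExecutorService pool;
    private final AtomicInteger indexed = new AtomicInteger();

    public IndexingPipeline(int workers) {
        this.pool = Executors.newFixedThreadPool(workers);
    }

    // Hypothetical stand-in for real parsing plus an indexing call.
    private void parseAndIndex(String document) {
        String tokens = document.toLowerCase(); // pretend to parse
        if (!tokens.isEmpty()) {
            indexed.incrementAndGet();          // pretend to index
        }
    }

    public int indexAll(List<String> documents) throws InterruptedException {
        for (String doc : documents) {
            pool.submit(() -> parseAndIndex(doc));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return indexed.get();
    }
}
```

In a real deployment the pool would likely be a separate service, with the search engine itself doing the parsing and storage as noted above.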

Session vs Cache vs In memory Systems

Sessions were a great thing once upon a time. Everyone was excited about storing user interactions, data, and so on per user, and replicating all of it across servers to provide seamless failover. This architecture requires that you scale your servers vertically. Just as air fills its container and work expands to fill the available time, the objects soon filled the memory. The advent of stateless architecture changed the way we view user interactions.

I have seen implementations where developers stored common datasets in per-user sessions. The stored data quickly filled the memory as more users logged in. A cache that can be shared across user sessions is a better option, since most of that data was read-only. Many systems are optimized at various levels: a database has its own caching mechanism, a second-level cache is supported by various systems, and some frameworks offer third-level caching of complex objects. So, where do you start and stop? One size may not fit all where the problems are unique.
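The difference between per-session storage and a shared cache can be sketched in a few lines. Below is a minimal, illustrative application-scoped cache for read-only reference data; the class name and keys are invented for the example. Stored per session, the same dataset would be duplicated for every logged-in user; shared, it is loaded once.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Sketch: one application-scoped cache for read-only reference data,
// instead of copying the same dataset into every user's session.
public class SharedReferenceCache {

    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    final AtomicInteger loads = new AtomicInteger(); // counts expensive loads

    public Object get(String key, Function<String, Object> loader) {
        // computeIfAbsent runs the loader once; later callers,
        // regardless of which session they belong to, reuse the value.
        return cache.computeIfAbsent(key, k -> {
            loads.incrementAndGet();
            return loader.apply(k);
        });
    }
}
```

Sessions then hold only genuinely per-user state, which keeps them small enough to replicate or discard cheaply.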

I used the term in-memory systems to cover a wide range of systems out there. It could be an in-memory database, a search engine, or any system that can easily store and serve data and has a persistent store from which the memory can be rebuilt. This can be a very good alternative to the other two approaches.

Upfront vs Continuous Process

Sometimes you have the luxury of continuously improving your system. If you do, that works very well from a return on investment (ROI) perspective. You can measure every optimization you perform and keep improving.

Sometimes you may not have that luxury. Your application needs to support certain use cases and a certain number of people. If it fails to do so on day one, it may fail to gain the trust of the users.

Design diligently. Establish a benchmark via load testing. Benchmarking is a little tricky because you cannot expect users to follow your load test use cases.


Performance Tuning

What about all the performance tuning, property tweaking, environment setting adjustments, and so on? All of these are useful, but they never make up for a poorly designed system. Use performance tuning as a checklist for the system you are running. If your system was designed for a certain altitude, mere tuning may not take you to the next height.


Conclusion

Don’t blame the tool. Use the right tool if you can find one. Don’t blame the design of a framework; understand how it is designed and see if it is the one for you. If you are a team of experts and you know what you are getting into, you can make it work. If you are not, find an expert. Do your due diligence. If you would be happy to claim the success, don’t be afraid to take the blame and make it right.