Author Archives: Shagul

Serverless Computing: A Universal Fit

Category: Development, Serverless

Many organizations are pushing for serverless architecture in every aspect of application development. Serverless is not just functions as a service. It spans everything from request handling to data, notifications, authentication, authorization, and more. More importantly, it has become a solution for companies of all sizes.

What Is Serverless?

Serverless applications do run on servers. It is just that the users are not the ones who provision and manage them. Computing resources are spun up to do a job for a very short time and are released for others to use. Cloud providers have built a vast amount of capacity and continue to do so. The idea is that not every single customer is going to need the maximum capacity at the same time. If demand ever does exceed capacity, the result is a service outage. Planning against that scenario is a constant part of every serverless provider's operations.

Serverless also differs from all the other offerings in the way cost is calculated. In a serverless model you pay per use, at a much more granular level. Usage is typically driven by an event and the amount of resources consumed while processing that event. It is common to run a function for less than a second and to pay for every 100 milliseconds of usage, as is the case with AWS Lambda pricing.
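To make that concrete, here is a back-of-the-envelope estimate. The rates below are illustrative, in the neighborhood of AWS Lambda's published pricing at the time of writing ($0.20 per million requests and about $0.0000166667 per GB-second); check the current price sheet before relying on them.

```java
public class LambdaCostEstimate {
    // Illustrative rates, roughly AWS Lambda's published pricing circa 2019;
    // verify against the current price sheet before using these numbers.
    static final double PRICE_PER_REQUEST = 0.20 / 1_000_000;   // $0.20 per 1M requests
    static final double PRICE_PER_GB_SECOND = 0.0000166667;     // per GB-second of compute

    public static void main(String[] args) {
        long invocationsPerMonth = 5_000_000L;
        double memoryGb = 0.5;    // a 512 MB function
        double durationMs = 120;  // actual run time per invocation

        // Duration is billed in 100 ms increments, so 120 ms is charged as 200 ms.
        double billedSeconds = Math.ceil(durationMs / 100.0) * 0.1;

        double computeCost = invocationsPerMonth * billedSeconds * memoryGb * PRICE_PER_GB_SECOND;
        double requestCost = invocationsPerMonth * PRICE_PER_REQUEST;

        System.out.printf("Monthly estimate: $%.2f compute + $%.2f requests = $%.2f%n",
                computeCost, requestCost, computeCost + requestCost);
    }
}
```

Five million half-gigabyte invocations of 120 ms each work out to roughly nine dollars a month under these assumptions, which is the granularity that makes the model attractive.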

Scale Up and Down Without Any Commitment

Serverless providers guarantee scalability for most of their service offerings. This is the most attractive part for startups, and for startups within an enterprise. It allows teams to innovate faster without a major investment in infrastructure.

Serverless As A First-Class Offering

AWS offers Spot Instances, which let you use unallocated capacity at a much reduced cost with the understanding that it can be interrupted at any time. Spot Instances are much more complicated to work with than serverless offerings. Serverless services come with the promise that they will scale as needed without interruption. This promise makes serverless a reliable first-class service for building critical applications.

Serverless Containers and Kubernetes

Azure and AWS both offer container services in a serverless mode. The idea is that Kubernetes is not for everyone to deploy and manage. Businesses should focus on deploying and scaling their applications without having to manage the underlying processes that make scaling possible. Azure Kubernetes Service describes itself as serverless Kubernetes. AWS has Amazon Elastic Container Service (ECS) with two modes of operation. For those who want more control, it can run in EC2 mode. For those who are not concerned about granular control, Amazon provides AWS Fargate, a serverless mode for containers.

Authentication In Serverless 

Authentication in serverless is billed a little differently, in terms of units of charge, compared to traditional Identity as a Service (IDaaS). Serverless services such as AWS Cognito charge based on monthly active users (MAUs) and not based on requests. Other identity providers such as Okta and Auth0 have similar models. The difference between serverless pricing and that of Okta/Auth0 is that the latter offer tiers such as up to 1,000 MAUs, 2,500 MAUs, etc.
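A rough sketch of the difference, with every number hypothetical and for illustration only: per-MAU billing scales smoothly with usage, while tiered plans charge for the bracket you fall into.

```java
public class IdentityCostComparison {
    public static void main(String[] args) {
        int maus = 1_200; // monthly active users

        // Hypothetical per-MAU rate, in the style of AWS Cognito's pay-per-MAU model.
        double perMauRate = 0.0055;
        double payPerUse = maus * perMauRate;

        // Hypothetical tiered plans in the style of Okta/Auth0: crossing a
        // tier boundary (here, 1,000 MAUs) moves you to the next flat price.
        double tiered = (maus <= 1_000) ? 25.0
                      : (maus <= 2_500) ? 60.0
                      : 150.0;

        System.out.printf("Pay-per-MAU: $%.2f, tiered plan: $%.2f%n", payPerUse, tiered);
    }
}
```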

Upgradable Application Design

Various upgrade nightmares are primarily caused by tight coupling between services and by underlying dependencies on other services. If applications communicate over HTTPS and treat other services as black boxes, the nightmare scenarios are significantly reduced. Such a design does come with an upfront effort. The traditional model of REST APIs is evolving. If you are making a lot of single-purpose API calls from the browser to the server, consider building applications with GraphQL. GraphQL acts as a layer between APIs and data sources, reducing the number of calls needed to get the desired data. GraphQL has also gained attention in the AWS, Azure, and GCP communities in the serverless model. AWS provides AWS AppSync as its GraphQL engine, while Azure has third-party plugins and community support for GraphQL.
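As a minimal sketch of the idea: where a REST client might call one endpoint per resource, a single GraphQL request can ask for a user and their orders together. The endpoint and schema below are hypothetical; only the plain JDK HTTP client is used.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GraphQLClientSketch {
    public static void main(String[] args) throws Exception {
        // One GraphQL query fetches the user and their recent orders together,
        // where a REST client might call /users/42 and then /users/42/orders.
        String query = "{ \"query\": \"{ user(id: 42) { name email orders(last: 5) { id total } } }\" }";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/graphql")) // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(query))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```

One round trip returns the nested data the browser needs, which is the call-count reduction described above.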

Serverless Edge Computing

Edge computing is the next major application delivery model to go serverless. Every major cloud service provider offers edge computing capabilities, primarily geared towards IoT devices. Edge computing is becoming critical, especially for businesses with global user bases. It can be considered an extension of the content delivery networks we are used to; the main difference is that your serverless code runs close to the users.

Go Innovate

It does not matter which cloud provider you use. The key to success is the ability to innovate fast. Application delivery is not like building a plane or a space program, where fast usually means a few years. In application development, teams are building and deploying applications in a matter of weeks. If you are new to serverless, start with functions and offload some of the workload, then slowly expand to other services.

Serverless offerings are geared towards businesses that are startups, or that would like to act like startups and innovate faster. It is no surprise that big companies are already spending millions of dollars on serverless services. The actual choice of what to use will depend on more than cost alone. Every business and its executives need to evaluate the right model based on cost, talent pool, and, more importantly, organizational culture. Make your team the most valuable variable in the equation. The industry is changing rapidly, but the core principles, and the need to solve the problems we face, remain the same.

Published: July 8, 2019


Heroku: An Awesome PaaS Platform

Category: Development, Microservices

The days when companies measured productivity by the amount of time worked and the number of lines of code written are long gone. In the fourth industrial revolution, businesses do not have the luxury of spending years developing products. The audience and the market are moving so fast that companies have to continuously innovate and deliver new products and services to stay in business.

Cloud service providers who offer various “as a Service” products also have to continuously evolve. We are in a cloud-native microservices and serverless era. One such “as a Service” product is Heroku, a Platform as a Service (PaaS) founded in 2007 and acquired by Salesforce in 2010. There are dozens of alternatives among the various “as a Service” offerings available. Some of them require new expertise while others leverage existing talent. We will keep our focus on Heroku for this article. Heroku is a product where a developer who knows how to commit code can deploy a scalable application with a single commit.

Many of us have heard about two-pizza teams and how you should keep the number of team members responsible for a service to a minimum. With Heroku, you can start with just a couple of engineers and add more as needed. A lot of focus over the past few years has been on building a minimum viable product (MVP), an idea made popular by the lean movement. The era we are in moves even faster. Why build just an MVP when you can build a great application with a team focused on the business need, spending less time on platform, infrastructure, and even DevOps?

Should You Orchestrate Containers?

Businesses no longer have to buy servers and install operating systems. If you are still doing that, it is like living in the era of CRT monitors and manual typewriters. Provisioning a server with the OS and all the libraries is just a click and a few seconds away. In the same way, running scalable applications should not mean every business needs to learn how to deploy, orchestrate, and manage containers on every single cloud service provider.

Heroku DevOps Support

With Heroku, a single git push heroku master can deploy your Java, Ruby, Python, Go, Node, and many other applications onto scalable cloud infrastructure.
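For a sense of what that push actually deploys, here is a minimal sketch of a Java web process as Heroku runs it. Heroku injects the listening port through the PORT environment variable, and a one-line Procfile such as web: java -jar target/app.jar tells it what to start. The class below uses only the JDK's built-in HTTP server.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class App {
    public static void main(String[] args) throws Exception {
        // Heroku tells the dyno which port to bind via the PORT environment variable.
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));

        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "Hello from Heroku".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```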

Heroku provides various DevOps support with its toolset. Heroku has its own tools, such as:

  • The Heroku CLI for creating and managing apps from the command line
  • Heroku Pipelines and Review Apps for promoting builds from staging to production
  • Heroku CI for running your test suite on every push
  • The Heroku Dashboard for administration and metrics

One of Heroku's core contributions to the community is buildpacks. Buildpacks make it easy to deploy applications. The auto-detection, build, and deployment of applications means you could literally deploy your code to a staging environment without even having all the SDKs installed locally. The point here is to emphasize the simplicity of buildpacks, not to recommend that you develop without the right tools. If you have to make a small change to your code, you could even edit and commit directly from GitHub and trigger a deployment via pipelines.

Heroku teamed up with Pivotal in 2018 to create Cloud Native Buildpacks. More on this can be found at https://buildpacks.io/.

Productivity means developers focus on what matters most to satisfy the business need. A business that does not adopt these practices simply gets less benefit and more frustration out of its team.

Heroku is available in multiple regions, and via pipelines you can easily deploy an app to more than one of them.

Heroku For Compliance

Businesses that require HIPAA compliance, PCI compliance, and the like can make use of Heroku Enterprise. Heroku provides Private Spaces that are accessible only via VPN and from within a VPC, and it provides PrivateLink to AWS resources. Heroku itself runs on AWS servers and is available in many AWS regions. Heroku may or may not continue to run on top of AWS, but its commitment to supporting VPC/VPN connections to other cloud providers' resources and to on-premise systems seems strong.

Heroku Ecosystem

Since Heroku is a product owned by Salesforce, its obvious focus is on providing integration with the Salesforce (CRM) platform and associated products. Salesforce provides Heroku Connect to synchronize data between Salesforce and a Postgres database in an enterprise deployment.

Microservices in Heroku

You can run applications in a microservices architecture within Heroku Private Spaces or in the much more affordable public spaces. It all depends on the type of applications you are running and your compliance needs. If you are building applications in Java and are interested in Spring Boot and Spring Cloud, you could look into JHipster. JHipster provides various tools to build and deploy applications to various cloud providers, including Heroku. Spring Cloud relies heavily on the Netflix OSS tools, and Netflix may discontinue a project or put it in maintenance mode. Spring Cloud has its own release cycles, and sticking to them will ensure you can focus on your applications rather than figuring out which dependencies are compatible.

Where Do You Start?

The focus of this article is to introduce readers to Heroku. You can start by building, deploying, and running applications for free, then choose the compute size that suits your needs, starting at $7 a month per container (dyno).

The best place to start is the Heroku Dev Center. Choose your language, go through a quick tutorial, deploy your application, and feel the power. It is a great experience to run and scale your application without having to worry about SSL, load balancing, the OS, SDKs, networking, orchestration, or setting up monitoring. Heroku provides a web dashboard as well as a powerful CLI tool. You can also find a lot of add-ons at the Heroku Elements Marketplace to enhance your application, many of them with a free tier as well.

Do Not Stop There

A blog on Dev.to comparing AWS and Heroku is a quick read on the added value Heroku provides compared to AWS. You can benefit from the power of running microservice apps on Heroku along with the other services out there. You do not have to restrict yourself to Heroku Elements; you can combine the power of Heroku with AWS and other cloud provider offerings. As an example, Okta's developer tier is free for up to 1,000 monthly active users, enough to get you started with authentication. In one of the applications we developed, we leveraged an AWS Lambda function to recognize and parse documents (OCR using Tesseract), and another function to zip documents for download. The point is that you can build an awesome application that costs less than $1 per user per month, with the ability to scale as needed. There are other competing products such as Pivotal Cloud Foundry, but Heroku is a developer-friendly place for small and medium-sized businesses to get started.
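As an illustration of that pattern (not the actual code from the application above), here is a minimal sketch of a Java Lambda handler that zips the documents it receives. It assumes the aws-lambda-java-core library on the classpath; a real implementation would stream files from S3 rather than take them inline.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// A minimal sketch: zips the documents it is given (filename -> content) and
// returns the archive base64-encoded, keeping the work off the web tier.
public class ZipFunction implements RequestHandler<Map<String, String>, String> {

    @Override
    public String handleRequest(Map<String, String> documents, Context context) {
        try (ByteArrayOutputStream buffer = new ByteArrayOutputStream();
             ZipOutputStream zip = new ZipOutputStream(buffer)) {
            for (Map.Entry<String, String> doc : documents.entrySet()) {
                zip.putNextEntry(new ZipEntry(doc.getKey()));
                zip.write(doc.getValue().getBytes(StandardCharsets.UTF_8));
                zip.closeEntry();
            }
            zip.finish(); // flush the central directory before reading the buffer
            return Base64.getEncoder().encodeToString(buffer.toByteArray());
        } catch (Exception e) {
            throw new RuntimeException("Failed to build zip", e);
        }
    }
}
```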

Published: June 27, 2019


Liferay: In A Serverless and Microservices Era

In the era of microservices and serverless architecture, it is essential to evaluate whether you need to build or buy software. A decade ago, there was still a lot of pushback against virtual machines (VMs) for production servers, as they were considered the cause of performance issues. One easy solution at the time was to scale vertically by adding more vCPU and memory. That is not the case anymore. Organizations have not only adopted virtual machines, they are also moving towards containers and serverless. Applications no longer have to be built and deployed as one massive piece of software.

Modular Monolith

As the tech giants were sharpening their skills in microservices, containers, and serverless, most businesses were still struggling with modular monolithic applications. They are called modular monoliths because the applications are written as modules but deployed as a monolith.

How can you tell if you are running a modular monolith? There are a few indicators. The first is that upgrading the application takes all of its services down during the process. The second is that scaling one part of the application means scaling the whole application server. A third is that the application dictates which programming languages and constructs you must use.

Trend

As of now, many businesses have adopted or are weighing their options with microservices and serverless architecture. These approaches bring their own challenges in orchestration, monitoring, and management, which are different problems from those of traditional modular monolithic applications. Many tools make it easier to deal with microservices and serverless architecture, and several support a single CLI command to deploy changes to any environment. In the serverless area, Amazon has its own AWS SAM CLI, while Serverless, Inc. has the Serverless Framework.

Cloud technologies adoption is becoming common in sectors such as government, healthcare and payment card industries.

What Are We Solving? Is There A Business Driver?

The goal is not to adopt technology for technology's sake. The goal should be to solve the business need in a cost-effective and timely manner while meeting the demands of ever-changing requirements. We need agility and speed while simultaneously running a highly available application. It is important to keep our eyes on the goal. If a modular monolith solves your need really well, you may want to keep it. But let's not allow the love for our legacy applications to keep us from innovating in business and technology.

How Do We Adopt?

Adoption takes time. It does not have to be an all-or-nothing approach. The best way to start is to explore the options for some parts of your application. If you have to parse documents, scan documents, or analyze text or video, it may be better to externalize those tasks to serverless offerings than to deploy and manage those services yourself. Traditionally, applications run command-line tools inside the same application server to perform all sorts of tasks. The issue with this approach is that you are forced to scale the whole application layer whenever these other processes, which compete for the same resources, need to scale. A piecemeal approach may be a good starting point.

Is Liferay a Modular Monolith?

The above prelude is important for the following discussion. It is very likely that some readers may not even have heard of Liferay. Liferay is a Java-based digital experience product primarily used to build intranets, customer portals, dashboards, and public-facing sites. As a product, Liferay has been around for more than 15 years. I have spent more than a decade with Liferay since I was introduced to it in early 2007.

Liferay has evolved over the past decade, yet it may fundamentally look the same when it comes to the deployment architecture. Let’s quickly go over a few points:

  • Search. A decade ago Liferay used Lucene by default and supported Solr and other engines for searching and indexing documents. Now it uses Elasticsearch by default while still providing support for Solr.
  • Database. Liferay runs on a relational database, but you can develop applications (portlets) that use their own datasources. This has remained the same since the beginning. Liferay did remove some of its support for application-level database sharding.
  • Deployment Architecture. In a high availability environment, Liferay instances are deployed as clusters, with all the instances sharing the same database and data storage. Plugins can share the same database or connect to external services and databases. The major change for enterprise customers is that Liferay recently started supporting an elastic licensing model where you can increase and decrease the number of instances while paying only for the additional time. In earlier versions, license requests went through a ticketing system; this has changed since the introduction and evolution of the Liferay Connected Services plugin.
  • Vertical Scaling. It is very common for a Liferay installation to demand a more powerful application server configuration, as a lot of heavy lifting happens at this layer. For example, all document parsing and conversion happens in the application server, and only the indexed document is pushed to the search engine. Likewise, all stored content images are rendered and cached at the application server layer. It is technically possible to externalize this caching.
  • Plugin Portlet Development. Portlets are generally written in Java, often with frameworks such as Spring MVC, JSF, etc. In the latest version of Liferay, you can write portlets using JavaScript frameworks such as React, Angular, etc. Liferay supports bundling the JS application and deploying it to the server, or running it as a standalone JS application using Liferay remote services. It may be very convenient to bundle all the JavaScript and deploy it to the application server, but deploying in such a manner means your application server is also the web server, serving a lot of JavaScript and CSS.
  • Services. Liferay has supported exposing remote services via SOAP and JSON APIs for a long time now.

Various aspects of Liferay have evolved over the past decade, yet at a high level it may look fundamentally the same. When it comes to scaling and upgrades, Liferay deployments resemble a modular monolithic application. Is there a way to address some of these concerns?

Microsites As An Option

One way Liferay solves some of the scalability and availability needs is via a microsite architecture. As an example, we can see how Liferay addresses its own needs. Liferay.com probably started as a single application server, which later evolved into a cluster. Various needs for separation of content and access were met through communities and memberships in a single application cluster. One disadvantage of this approach is that you have to scale the whole system vertically and horizontally to support the growing user base. Another major challenge with a single application cluster is that an upgrade affects all types of users, sites, and communities.

One way to solve the scalability and upgradability challenge is to run separate clusters as various microsites such as help.liferay.com, web.liferay.com, partner.liferay.com, community.liferay.com, dev.liferay.com, etc. They are all tied together via SSO but exist independently of each other. If your departments are big, they may want to manage and upgrade their own microsites. This would result in multiple versions of Liferay running within the same organization, which could create the very silos that organizations want to prevent. As we all know, an organization's culture is reflected in the way its teams talk to each other.

Build Or Buy

If you have the capability to develop greenfield applications, you should definitely look into your options that are not constrained by a platform. Do you need a blog or are you trying to build a site like https://medium.com? Is your need for service easily fulfilled by an existing platform or do you have to customize the platform heavily?

It all depends on your business needs. If you are going to spend a significant amount of time and money customizing a product, it may be worth looking into building greenfield applications. You could also take a hybrid approach where your application leverages functions as a service and other serverless features as needed. If your application needs to export a lot of files as part of regular use, it would be wise to run that zip process in an AWS Lambda function, or as a separate microservice, in an asynchronous way. Running such processes within the same application server may not suit your use case; it is better to free up your application server resources so that they can serve other requests.

What’s Next?

I wish I could cover everything in a single article. But that would become a modular monolithic article. I hope to cover more on this topic in future articles. If there is something that interests you specifically, please comment.

Published: May 22, 2019


OSGi Adoption and Liferay

Category: Development, Liferay, OSGi

Where Do We Start?

The goal of this article is to give a business executive a perspective on how relevant the adoption of OSGi in Liferay is. Is it really worth the effort to learn OSGi? There are many ways to develop an application, and there are numerous frameworks in various languages such as Java, JavaScript, Ruby, PHP, Python, etc. If you are a firm that does not deal with Java technologies, OSGi is not for you. If you have a Liferay implementation or are considering one, continue reading.

A Little Bit of History

OSGi has been around since 1999. Liferay has invested more than five years in the OSGi technology, and the last two years were probably the most intense of all, as Liferay migrated most of its core portlets and services. Until recently, most of the Liferay developer community could easily ignore OSGi and continue to develop plugins the old way. The recommended approach going forward is the OSGi bundle, though legacy deployment may still be supported.

Why OSGi in Liferay?

This is what Ray Augé had to say over the years. Ray has led this effort and is the key player behind the implementation.

“Liferay is a large, complex application which implements its own proprietary plugin mechanisms. While Liferay has managed well enough dealing with those characteristics over its history, it’s reached a point on several fronts where these are becoming a burden which seem to bleed into every aspect of Liferay: development, support, sales, marketing, etc.” – Ray Augé October 11, 2012

“Liferay is a complex beast with an intricate network of features, so many features in fact that they occasionally have such indistinct lines that finding where one feature ends and another begins can be difficult…The number of benefits is almost too great to list. However, one of the greatest advantages can’t be discussed enough: Modularity.” – Ray Augé Feb 4, 2013

The primary reason Liferay adopted OSGi is to more easily manage Liferay as a platform; it makes things easier for the core Liferay developers. The key benefit is the modularity of the OSGi platform, which allows the end user to add, remove, and enhance services dynamically. The hardware industry has long followed a modular approach that allows us to add and remove components easily, with a software piece that recognizes those dynamic components. Not all software is developed that way, and such modularity should not be the responsibility of each piece of software but of the platform that enables it. A capable platform ensures that we do not have to reinvent the wheel in every piece of code. This is one of the promises of the OSGi platform: if you can tell what your piece of code provides, and someone else can tell what their piece of code needs, the platform can match the two up while the system is still running. The OSGi platform offers various other benefits that you can read up on.
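Here is a minimal sketch of that matching using OSGi Declarative Services (it assumes the OSGi component annotations on the classpath; Greeter and the component names are hypothetical). One component declares what it provides, the other declares what it needs, and the runtime wires them together, and can rewire them, while the system is running.

```java
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// A hypothetical service interface: the contract the platform matches on.
interface Greeter {
    String greet(String name);
}

// "What my code provides": the DS runtime registers this as a Greeter service.
@Component(service = Greeter.class)
class FriendlyGreeter implements Greeter {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// "What my code needs": the runtime injects whichever Greeter is available,
// and rewires the reference if that service comes and goes at runtime.
@Component(immediate = true)
class GreetingPrinter {
    @Reference
    private Greeter greeter;

    @Activate
    void activate() {
        System.out.println(greeter.greet("OSGi"));
    }
}
```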

How Relevant is OSGi in Current Architecture?

If you are a business that is using Liferay or looking to use it, it is important that the team invest the time to learn the basic concepts of OSGi. OSGi has a very dedicated group of members who have given everything they have to keep it relevant. It is very likely that Liferay will continue to use OSGi for at least another five years (an estimate based on how long the adoption itself took), so time spent learning OSGi while using Liferay is not wasted.

OSGi provides the concept of µservices within a single JVM, and this is the feature Liferay primarily relies on. To stay relevant with modern cloud architecture and distributed services, there are various OSGi initiatives, such as Amdatu, that embrace cloud computing.

You could develop a full-fledged application entirely in OSGi, just as you would with Spring Boot and Angular or any other JavaScript, PHP, Ruby, or Python framework. To put it another way: instead of asking, “How relevant is OSGi?”, we can say that OSGi is trying to stay relevant by making use of all the new technologies. One thing the community may lack is the funding and hype that some other platforms enjoy.

Is OSGi the Only Way to Build Modular Applications?

OSGi is probably the best way to build modular, dynamic applications in Java within a single Java Virtual Machine (JVM). The key thing to note here is the single JVM. The way software architecture is evolving, applications are built in a more tolerant way, so that you can remove a server and add it back in a matter of seconds to a minute or two.

The concept of changing a class or an implementation within a running JVM is less relevant if you are already a shop that knows how to build and deploy applications to an elastic cloud. At that level you are elastically scaling virtual machines in a more tolerant way.

Spring Boot, along with PaaS providers such as Pivotal Cloud Foundry and Heroku, is an alternative for those developing in Java. OSGi enRoute is trying to provide similar capabilities, letting you bundle your app as a jar file and run it anywhere. Combined with the pipelines offered by some PaaS providers, it is nowadays as simple as committing the code; the rest is taken care of for you.

If you are familiar with some of the JavaScript frameworks, they do a whole lot of things without you having to worry about class loading issues. In fact, Liferay itself is working on a similar PaaS called WeDeploy, currently in alpha. Liferay's interest in providing such a platform clearly indicates an effort to stay relevant and to diversify risk.

It all depends on what you are looking for. If you are a platform or tool builder, it makes a lot of sense to use a framework like OSGi. If you are already running tolerant applications in the cloud, the modularity offered by OSGi may not concern you at all.

A Look at OSGi Adoption

The Eclipse IDE is one of the most successful adopters of OSGi that a Java developer comes into contact with on a regular basis. Spring tried to support OSGi but later dropped Spring DM due to its complexity. GlassFish adopted OSGi, but that project was later discontinued by Oracle. Liferay has taken the major step of adoption and successfully launched a major version on it. There is still a lot of work to do within Liferay, but Liferay provides good support for those developing against it. OSGi itself has moved further on with OSGi enRoute and various cloud computing offerings. Liferay's primary focus in OSGi adoption is the capabilities within a single JVM. In my opinion, by doing so Liferay has committed itself and become a major player in OSGi for web applications. Continued success and user adoption within the Liferay community could very well provide the oxygen that OSGi needs in the web application space.

Liferay has done the toughest part of OSGi adoption in its platform. For end users, Liferay provides various utilities to interact easily with OSGi service trackers and the like. Understanding the basics of OSGi is sufficient to develop plugins for the Liferay platform, and developers can dive deeper into OSGi as needed while working with Liferay. This is similar to the way developers using Liferay interact with services without having to master Spring or Hibernate.
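For a flavor of the underlying API that such utilities wrap, here is a minimal sketch using the standard OSGi ServiceTracker from a bundle activator; SomeService is a hypothetical interface.

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.util.tracker.ServiceTracker;

// A sketch of the plain OSGi ServiceTracker API that higher-level utilities
// build on. SomeService stands in for whatever service your plugin consumes.
public class Activator implements BundleActivator {

    private ServiceTracker<SomeService, SomeService> tracker;

    @Override
    public void start(BundleContext context) {
        // Track SomeService registrations; the tracker copes with the service
        // appearing, disappearing, or being replaced while the bundle runs.
        tracker = new ServiceTracker<>(context, SomeService.class, null);
        tracker.open();

        SomeService service = tracker.getService(); // may be null if nothing is registered yet
        if (service != null) {
            service.doWork();
        }
    }

    @Override
    public void stop(BundleContext context) {
        tracker.close();
    }
}

interface SomeService {
    void doWork();
}
```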

Bndtools and various OSGi frameworks such as Apache Felix, Amdatu, etc., make things easier for developers. There is still a lot of activity, primarily supported by the OSGi Alliance members, which keeps the community strong even after 17 years.

Conclusion

There are many ways to develop rich applications. If you are a shop that has invested in Liferay, then getting up to speed with OSGi will make your job easier. If you are not invested in Liferay or Java, you could live without ever knowing what OSGi is. The one thing OSGi lacks is hype and support from the broader Java community. As Liferay tries to be more than a portal, you as a business need to think beyond any one programming language or platform and evaluate what is best for you. Technology changes fast. Instead of adopting technology for its own sake, adopt it to solve your needs and your customers' needs.


Marketing, Did You Get Sold?

Category: Marketing and Sales

I have been trying to understand the difference between marketing and selling, especially how it plays out in the IT industry. Surely, we can all relate to various things that we do not need or want but end up buying throughout our lives.

The American Marketing Association (AMA) defines marketing as:

“Marketing is the activity, set of institutions, and processes for creating, communicating, delivering, and exchanging offerings that have value for customers, clients, partners, and society at large.” – American Marketing Association

This is how a former professor at Harvard defined selling:

“Selling concerns itself with the tricks and techniques of getting people to exchange their cash for your product. It is not concerned with the values that the exchange is all about.” – Theodore Levitt, Former Professor at Harvard Business School

Quite often we can trace the success or failure of a project to the difference between marketing and selling. We will review some of the pitfalls so that you can spot issues early on. At the end of the day, buyers need to make sure they are getting value and satisfaction for their money.

Pitfalls

1. Just Launched a Site With Millions Of Users

If you hear someone say they just launched a site that will serve millions of users, you might imagine it is the next Facebook. It may very well be, and if you are looking to build a similar system, you are in the right hands. In certain cases, though, it is not: what your million users need may be very different from what the system that was built delivers.

Say you need to build a site where millions of users log in, search, and buy products. The needs of this site are very different from those of, say, a health insurance or auto insurance website. Both sites may well have the same number of users, but the activity levels and needs are very different. With a health or auto insurance website, once you set up auto-pay for the next six months or a year, you may hardly log in unless there is a claim or you want to add another dependent. On the other hand, the insurance industry may need to process and generate documents on demand or offline, which can be more taxing than generating a simple invoice.

Most IT executives are experienced enough to ask the right questions about concurrent users, active users, daily logins, hourly logins, peaks, number of transactions, etc. But some may simply go with their impulse and buy a solution, only to realize later that it is not what they wanted, a few million dollars too late.

2. Delivered Solutions To A Major Company

We lend our ears as soon as we hear a major company's name. If one of the Fortune 500 companies adopted this technology or chose this vendor, then surely they are the right ones for me. It would be very simple if that were always the case. A vendor may indeed be the preferred services provider for a company and deliver all of its IT needs, which is great news. In some instances, though, a vendor delivered a very small piece of a solution to a major company and is using that name to win work it cannot deliver. Asking the right questions is key, and the questions that are relevant to your project may differ from the ones relevant to others. Instead of chasing the big names, look at whether the vendor has what it takes to deliver what you need.

3. Look At Our Awesome Testimonials

[Image: basis2 customer testimonial]

This is an awesome testimonial, and any city that needs a new billing system could consider looking into basis2. The one thing it does not tell you is that even basis2, the fifth attempt, was eventually deemed a failure after a year. After a total of $49 million and five attempts, the basis2 system sent some residential customers a $331K utility bill, compared to $97 the previous month.

[Image: sample bills customers received from basis2]

The company that sold the solution was able to capitalize on a failed project. For those who are interested, you can read the reports from the city at the below links.

City of Philadelphia Chooses Basis2

City of Philadelphia Audit Report A Year Later

Things to Consider

1. If It Works, Keep It

How many times have you heard something like, “If it is working well for you, keep it”? Typically, what you hear from a salesman instead is that you need everything they sell. Quite often the commission is paid not at the end of delivery but much sooner. Many will remember the recent Wells Fargo fallout and the pressure of sales targets.

2. Never Shop Hungry

We all either know this or acknowledge the saying, “Never go shopping when you are hungry.” If you are in a desperate situation, it is likely that you will buy anything that you believe will satisfy the immediate need. A more proactive solution is to continuously assess what you have and plan for the future.

3. Don’t Feel Rushed

If you hear someone say, “If you don't decide now, it won't be available later,” it means the seller is more in need than the buyer. If you like to be in control of what you are buying, never let external factors take that control away. The more I have taken the opportunity to think through what I am getting without rushing, the more satisfied I have been. The key here is satisfaction, even if you end up with something else later on. Let us not confuse due diligence with procrastination.

4. Trusted Partner

The difference between a trusted partner and a con man is like that of an anchor versus a fishing rod. One keeps you from drifting away; the other attracts you with bait. If you find a trusted partner, continue to work with them.

Conclusion

There is a lot more that could be written on this topic. The success or failure of a project starts with someone getting sold. If you do not have a plan for your money, there are people who definitely have plans for it. We learn a lot through failed attempts; it is much smarter to learn from others' failed attempts than to insist on failing yourself.


Application Performance, A Never Ending Journey

Category: Development

Introduction

Application performance has always been a concern, even as hardware and memory have gotten cheaper over the past decades. Does it still matter when you can containerize pretty much everything, and even run a container per user while they are online? The answer is yes. Dynatrace, New Relic, and other application performance monitoring providers have those markets covered as well. So, where do we start?

Tools

As the saying goes, a good craftsman never blames his tools. You can look at this in a couple of ways: you make it work with what you have, leveraging your expertise, or you choose the right tool for the job when you have the option. Once the tool is chosen, complaining about it without a proper course of action, or a change of course, only reveals poor craftsmanship.

If all that you have is a rock, you could still make a spear out of it. It does not mean you ignore all the inventions and discoveries that came afterwards and get stuck between a rock and a hard place.

Design

No one has unlimited access to resources, and even if you did, it would not guarantee performance. As Amdahl's law explains, the part of the work that cannot be parallelized limits how much you can gain by merely scaling. It is like trying to fill a container with water through a flour sieve. Design your application for the task it is intended for.
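In its standard form, Amdahl's law says that if a fraction $p$ of the work can be parallelized across $N$ workers, the overall speedup is

$$S(N) = \frac{1}{(1 - p) + \frac{p}{N}}$$

Even with unlimited servers ($N \to \infty$), a workload that is 90% parallelizable ($p = 0.9$) caps out at a $1/(1 - 0.9) = 10\times$ speedup. The remaining 10% is the sieve.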

As an example, if your application needs to process, parse, and index a lot of documents, it may be better to scale that aspect independently, without affecting the performance of the rest of the system. If you are buying a platform, see if it can easily support scaling as shown below; if not, plan for it with the tool that you have. Search engines themselves are capable of parsing, indexing, and storing documents.

[Diagram: Search Optimized Architecture]

Quite often a system may support an external search engine but may not support externalization of document parsing. Depending on your use case, this may or may not be an issue.

Session vs. Cache vs. In-Memory Systems

The session was a great thing once upon a time. Everyone was excited about storing user interactions, data, etc. on a per-user basis and replicating it across servers to provide seamless failover. This architecture requires that you scale your servers vertically. Just as air fills its container and work expands to fill the time available, the objects soon filled the memory. The advent of stateless architecture changed the way we view user interactions.

I have seen implementations where developers stored common datasets in per-user sessions, and the stored data quickly filled the memory as more users logged in. A cache shared across user sessions is a better option, since most of that data was read-only. Many systems are optimized at various levels: a database has its own caching mechanism, a second-level cache is supported by various systems, and some frameworks offer a third level of caching for complex objects. So where do you start and stop? One size may not fit all where the problems are unique.
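A minimal sketch of the shared-cache idea, assuming the dataset is read-only: one process-wide map replaces the per-session copies. In a clustered deployment you would likely reach for a distributed cache instead, but the principle is the same.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// One process-wide cache shared by every session, instead of each user
// session holding its own copy of the same read-only dataset.
public class ReferenceDataCache {

    private static final Map<String, Object> CACHE = new ConcurrentHashMap<>();

    // computeIfAbsent loads the dataset once; concurrent callers for the same
    // key wait briefly rather than each loading and storing a duplicate copy.
    public static Object get(String key) {
        return CACHE.computeIfAbsent(key, ReferenceDataCache::loadFromDatabase);
    }

    private static Object loadFromDatabase(String key) {
        // Placeholder for the expensive lookup the per-session copies repeated.
        return "dataset:" + key;
    }
}
```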

I use the term in-memory systems to cover a wide range of systems out there: an in-memory database, a search engine, or any system that can easily store and serve data and has a persistent store from which the in-memory state can be rebuilt. This can be a very good alternative to the other two approaches.

Upfront vs Continuous Process

Sometimes you have the luxury of continuously improving your system. If you do, that works very well for return on investment (ROI): you can measure every optimization you perform and keep improving.

You may not always have that luxury. Your application needs to support certain use cases and a certain number of people from day one; if it fails to do so, it may never gain the trust of its users.

Design diligently, and establish a benchmark via load testing. The benchmark is the trickier part, because you cannot expect users to follow your load test use cases.

Performance Tuning

What about all the performance tuning, property tweaking, environment settings adjustments, and so on? All of these are useful, but they are never a substitute for good design. Use performance tuning as a checklist for the system you are running; if your system was designed for a certain altitude, mere tuning will not take you to the next height.

Conclusion

Don't blame the tool; use the right tool if you can find one. Don't blame the design of a framework; understand how it is designed and see if it is the one for you. If you are a team of experts and you know what you are getting into, you can make it work. If you are not a team of experts, find an expert. Do the due diligence. If you would be happy to claim the success, don't be afraid to take the blame and make it right.