Tuesday, October 25, 2016

What Is DDoS?


Friday, October 14, 2016

Can Your Website Be Found?

Your website acts as the face of your brand online, and making sure that it can be found should be your top priority. The Domain Name System (DNS) is the directory system of the internet and is crucial to the performance and security of your website. You can have a state-of-the-art website, but if DNS queries for your domain are not resolving to the correct IP addresses, customers will not be able to find your site. Even worse, your customers could end up on the wrong, potentially malicious website.
Because DNS works always-on but behind the scenes, its security is often overlooked. In this blog post we’ll explore how our authoritative DNS product, Verizon ROUTE, ensures the availability and accuracy of your DNS records while also ensuring fast resolution of your DNS queries.
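To make this concrete, here is a minimal sketch of checking what IP addresses a DNS query for your domain currently resolves to, using only Python's standard library. The domain and the expected address are placeholders, and this is just an illustration of DNS resolution, not part of the ROUTE product.

    import socket

    # Placeholder domain and the address set we expect it to resolve to.
    DOMAIN = "example.com"
    EXPECTED_IPS = {"93.184.216.34"}  # hypothetical expected value

    def resolve_ipv4(domain):
        """Ask the local resolver for the IPv4 addresses of `domain`."""
        results = socket.getaddrinfo(domain, None, family=socket.AF_INET)
        # Each entry is (family, type, proto, canonname, (address, port)).
        return {sockaddr[0] for *_, sockaddr in results}

    actual = resolve_ipv4(DOMAIN)
    if actual == EXPECTED_IPS:
        print(f"{DOMAIN} resolves as expected: {sorted(actual)}")
    else:
        print(f"WARNING: {DOMAIN} resolves to {sorted(actual)}, expected {sorted(EXPECTED_IPS)}")

A monitoring job that runs a check like this from several locations will catch the failure modes described above: queries that do not resolve at all, or that resolve to the wrong address.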

Is Your Website Available?

Imagine someone sitting in their parents' basement, spending $10 to bring down your million-dollar online business. Pretty shocking, right? Unfortunately, this scenario plays out regularly. According to Verizon’s Data Breach Investigations Report (2014), attackers can rent a botnet to mount distributed denial of service (DDoS) attacks for less than $10 an hour. And a study by Kaspersky Lab found that just one DDoS attack can cause losses ranging from $52,000 to $400,000.

Read more: https://www.verizondigitalmedia.com/blog/2015/11/is-your-website-available/

Saturday, August 25, 2012

Managing Distributed Teams - Lessons Learnt

I have been involved in several big software development projects that used distributed international teams. Many of those projects were on track to fail until lessons were learnt and things were changed. It is not uncommon for such projects to go over budget and fall behind in terms of features and timeline. So here are some of the issues I've noticed:

1. Total ignorance of the potential issues with distributed teams: Managers and team leads often don't spend any time discussing the potential issues that could arise because of the distributed nature of the team. They are overconfident that everything will work just fine, so let's jump into it and make it work. If other companies are doing it, why can't we? In a few weeks, reality invariably hits them like a slap in the face. Then they look for scapegoats, make everybody's lives hell and screw up the project.

2. Not having an offshore manager with onshore experience: A person who doesn't have a social security number can never fully understand why SSNs are kept in secure databases. So the leader of your offshore team must have experience working in the country where the software application will be used. Besides features and requirements, a million cultural, communication and process issues will arise in offshore projects. If the team doesn't understand what you are talking about, you are in trouble, and no amount of documentation will help. You need somebody who has worked in your country/company/team to act as the liaison with your offshore team.

3. Using old-school PMBOK waterfall nonsense: Innovative software companies all over the world have moved away (or are moving away) from traditional waterfall project management and the overhead associated with the PMBOK methodology. If you use such an old-school process for managing distributed offshore projects, your projects are practically doomed. Before it hits you, your projects will be damaged beyond repair due to lack of regular feedback, lack of quality and other factors. So you must use an Agile approach. Keep the feedback loop as short as possible and put your product in front of users frequently. Minimize the documentation and process overhead. There are enough risks with an offshore engagement as it is; you don't want your project management process to make things worse.

4. Not having a project management process at all: I've already recommended an agile PM approach above, but I must mention that the lack of any project management process will kill an offshore project too. Don't think of project management as a way to keep tabs on people. Think of it as a way to facilitate smooth development, to lay out clear priorities, to detect problems earlier, to handle major challenges by breaking them down into chunks and to empower the team in general. In an offshore project, laying out clear priorities might be one of the most important aspects of project management, so find a way to do that. I've used in-house priority scores as well as similar scores provided by software such as RallyDev.

5. Starting the offshore engagement with a major project (vs. a smaller, low-risk project): If your company has no experience with distributed/offshore engagements and you start with a big, major project, that project is going to be in trouble. Start the offshore engagement with a small, low-risk project, learn your lessons and then think about offshoring bigger projects.

6. Not having quality measures in place: You must have code quality measures in place. All the common-sense things that good software developers advocate (automated code quality checks, continuous integration, code reviews, architecture reviews, frequent performance tests, unit testing, etc.) are even more important with distributed/offshore projects. People on offshore teams might have a totally different understanding of quality than you do, so you need to make the expectations and the process clear from the beginning (see the sketch after this list).

7. Not treating offshore teams with respect: Yes, this happens often. Errors are blamed on the invisible offshore teams, the offshore guys are always expected to be available for meetings, their holidays are not respected, enough guidance and training is not provided and unnecessary delivery pressure is put on them. Remember, the offshore guys are humans just like you, they make mistakes just like you and they are probably working harder than your onshore team. Also remember, offshoring was your idea. So treat the offshore guys with the same respect and patience that you would grant to your onshore team. You will probably need a bit more patience to deal with offshore teams, so keep that in mind; otherwise things will backfire.

8. Not getting involved in the selection of offshore team members: Often onshore managers have little control over the team members that are selected for the offshore work. At other times, managers just take offshore vendors at their word. I think that is a mistake. You should consider the offshore team an extension of your onshore team and pay attention to the skills and experience of the offshore team members. Training and mentorship can sometimes overcome skills and knowledge gaps, but you often won't have the luxury to train or mentor offshore team members. So it is even more important that you evaluate their profiles and, if possible, interview them before hiring them to build your products.
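As mentioned in point 6, quality expectations travel best across a distributed team when they are encoded as executable checks rather than tribal knowledge. Here is a minimal sketch of such a check: a unit test that every commit must pass in continuous integration. The discount function and its business rules are hypothetical, made up purely for illustration.

    import unittest

    def apply_discount(price, percent):
        """Hypothetical business rule: apply a percentage discount between 0 and 100."""
        if not 0 <= percent <= 100:
            raise ValueError("discount must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(100.0, 20), 80.0)

        def test_zero_discount_keeps_price(self):
            self.assertEqual(apply_discount(59.99, 0), 59.99)

        def test_invalid_discount_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()

Wiring a small suite like this into a continuous integration job gives both the onshore and offshore halves of the team the same unambiguous definition of "done".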

There are more things... I'll add those gradually.

Sunday, May 1, 2011

Google Unleashes the Panda

The big news in the search engine world these days is the search algorithm change implemented by Google over the last 3 months, a project that Google code-named Panda. If you are not part of the Internet business world or the geeky web development community, you probably didn't notice it. Google regularly changes its algorithms to improve search results. But if I may use VP Joe Biden's tactful comments to describe this particular change: "It is a big F***ing deal".

Search engine rankings of some big-name websites have gone down (and have gone up for some other websites). As a result, the search engine traffic and associated revenues of those websites have plummeted. People are panicking, and I've met some of them.

How did it all start? Apparently J.C. Penney spoiled the party for everybody. At least that is how it began, before other people were dragged through the mud by the big daddy of search. According to a NY Times story published in February 2011, J.C. Penney used dubious search engine optimization (SEO) techniques to game the Google search rankings. Basically, the SEO company hired by J.C. Penney published thousands of non-contextual links to J.C. Penney's website on websites set up just for the purpose of fooling the search engines. The more inbound links a website has on the Internet, the more important it is considered by the search engines. Thanks to these links, J.C. Penney ranked very high in Google search results over the holiday season and made big bucks. But Google got wind of it and also found many other companies employing such practices (commonly called black hat SEO practices). So Google changed its search algorithm in February 2011 to punish such sites. Along with J.C. Penney, many other sites lost their rankings in the Google search results. Here is a list of major websites that lost their Google rankings: Panda-Losers.
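For intuition on why a pile of purchased inbound links could move rankings at all, here is a toy sketch of link-based scoring in the spirit of PageRank. Google's actual algorithm is far more sophisticated and not public; the tiny link graph below is entirely made up.

    # Toy link graph: each page maps to the pages it links to (made-up data).
    links = {
        "retailer.example": [],
        "blog-a.example": ["retailer.example"],
        "blog-b.example": ["retailer.example", "blog-a.example"],
        "news.example": ["blog-b.example"],
    }

    def pagerank(links, damping=0.85, iterations=50):
        """Very simplified PageRank-style scoring over a small link graph."""
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                if not outlinks:  # a page with no outbound links spreads its score evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / len(pages)
                else:
                    for target in outlinks:
                        new_rank[target] += damping * rank[page] / len(outlinks)
            rank = new_rank
        return rank

    for page, score in sorted(pagerank(links).items(), key=lambda item: -item[1]):
        print(f"{page:20s} {score:.3f}")

In this toy graph, the score of "retailer.example" rises with every page that links to it, which is exactly the signal the link-buying scheme described above was trying to inflate.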

A second wave of Panda was unleashed in April 2011, this time globally.

The biggest losers slapped around by Panda are the websites termed "content farms" in SEO/Internet lingo. According to Wikipedia: "In the context of the World Wide Web, the term content farm is used to describe a company that employs large numbers of often freelance writers to generate large amounts of textual content which is specifically designed to satisfy algorithms for maximal retrieval by automated search engines. Their main goal is to generate advertising revenue through attracting reader page views as first exposed in the context of social spam."

Many well-known brands such as eHow.com, Hubpages.com, ezinarticles.com, about.com, associatedcontent.com, encyclopedia.com, etc. have suffered badly at the hands of Panda. Google, on the other hand, says that end users have been reporting much better search results since the Panda upgrade.

In a nutshell, this has been the big news in the search engine world over the last 2-3 months. So what advice does Google give to improve your search engine rankings after the Panda upgrade? Well, it is basically the same advice it used to give before: "Create a valuable website for the users (with high-quality content) and the search engines will show you love too. Follow our quality guidelines and stick to white hat SEO. Also, don't try to game us; we can unleash more Kung Fu Pandas to kick your behind."

Wednesday, April 20, 2011

Technical Debt

Wikipedia defines technical debt as a "neologistic metaphor referring to the eventual consequences of slapdash software architecture and hasty software development." In general, I consider technical debt to be all the things in software development (architectural design, refactoring, unit testing) that are normally considered peripheral to the development of the main business functionality, yet are needed to ensure the long-term quality of the software product.

It is not unusual to see a tug-of-war between product management and engineering about how much can be done in a given period of time. Engineering teams often find it hard to communicate the technical debt that they have to pay down to make the required changes. Try explaining unit testing, automated testing, refactoring and other such forms of technical-debt work to business users. But these things are very important for the long-term health of critical software systems, so technical debt affects the business in the long run.

Often the technical debt accumulates and has to be paid through drastic measures, such as stopping all future releases until the required clean-up of the code base has been done. I think the term "debt" in technical debt is just awesome. Just like financial debt, if the periodic payments of technical debt are not made, serious consequences ensue. So how can product managers avoid the painful drastic measures that are almost guaranteed unless the technical debt is paid on a regular basis?

Marty Cagan offers a simple suggestion in his book "Inspired". He recommends that product managers allow the engineering teams to use up to 20% of their capacity for maintenance or any other technical tasks that the engineers deem necessary to ensure the long-term health of the software systems. Marty says: "They (engineers) might use it to rewrite, re-architect or re-factor problematic parts of the code base, or to swap out database management systems, improve system performance - whatever they believe is necessary to avoid ever having to come to the team and say, 'we need to stop and rewrite'. If you are in really bad shape today, you might need to make this 30% or even more of the resources."

I like this recommendation a lot. But based on my experience, I think it can be very hard to get that magic 20% capacity on a regular basis, given the constant pressure on teams to deliver business functionality.
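To make the arithmetic concrete, here is a tiny sketch of what reserving that capacity looks like at sprint-planning time. The velocity figure and the 20% ratio are just illustrative numbers, not data from any project of mine.

    def split_capacity(velocity_points, debt_ratio=0.20):
        """Split a sprint's capacity between feature work and technical-debt work."""
        debt_points = round(velocity_points * debt_ratio)
        return velocity_points - debt_points, debt_points

    # Hypothetical team averaging 50 story points per sprint.
    feature_points, debt_points = split_capacity(50)
    print(f"Plan {feature_points} points of features and {debt_points} points of technical-debt work.")
    # -> Plan 40 points of features and 10 points of technical-debt work.

The hard part, of course, is not the arithmetic but protecting that reserved slice sprint after sprint.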

Marty explains that many companies that don't pay down technical debt on a regular basis end up facing existential crises. He gives the examples of eBay, Friendster and Netscape. At some point, the engineering teams at these companies told the product managers that they couldn't deliver any more features and that they had to stop and rewrite the whole code base, which was in a mess. eBay came close to collapse because of such a situation in 1999. Friendster and Netscape were never able to recover.

The bottom line is that it is in the long-term business interest of a software-driven company (which most companies are these days) to pay down technical debt on a regular basis. Allocating 20% of the engineering capacity to do that is a very reasonable recommendation.