Customer Information
It's all about Uptime!
 

FAQs

How are bandwidth burst and overage calculated?

Using the 95th percentile to calculate bandwidth usage means that the bandwidth assigned to an account (in this case 2Mbps) is allowed to burst to three times the base bandwidth for up to 36 hours per service month without incurring any additional charges. The 36 hours does not have to be continuous; it is accumulated throughout the month. For example, one day you might have 2 hours of spiked bandwidth, but not again for another week; or several days in a row of spiked bandwidth for only 15 minutes per day. Samples are based on the average bandwidth usage over 5 minutes (simply bytes/time, where the time is 5 minutes). All the samples are sorted and the highest 5% are discarded; the highest remaining sample is the 95th percentile. In a 30-day month there are 8,640 five-minute samples, so the discarded 5% amounts to 432 samples, which is exactly the 36 hours of free burst. Each direction is treated separately, and the higher 95th percentile of the two (in and out) is what is billed.
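To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. It assumes you already have per-direction byte counters sampled every 5 minutes; the function and variable names are illustrative, not part of any actual billing system.

```python
# Sketch of 95th-percentile billing from 5-minute byte counters (illustrative).

def mbps(byte_count, interval_seconds=300):
    """Average Mbps over one 5-minute sampling interval."""
    return byte_count * 8 / 1_000_000 / interval_seconds

def percentile_95(samples_mbps):
    """Sort the samples, discard the top 5%, return the highest remaining."""
    ordered = sorted(samples_mbps)
    cutoff = int(len(ordered) * 0.95) - 1  # last sample kept after dropping top 5%
    return ordered[cutoff]

def billable_mbps(in_bytes, out_bytes):
    """Rank each direction separately; bill the higher 95th percentile."""
    inbound = percentile_95([mbps(b) for b in in_bytes])
    outbound = percentile_95([mbps(b) for b in out_bytes])
    return max(inbound, outbound)
```

For a 30-day month, `percentile_95` drops the top 432 of 8,640 samples, so an account is only billed overage when the figure returned here exceeds its base bandwidth.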

Overage is also monitored and typically not billed the first time there is sustained burst usage. We try to make sure that our customers are on the right plan for their needs, and we will put an addendum in place on contract plans to add base bandwidth rather than let customers keep getting hit with overage charges.

Why is Multihomed BGP Important to a Data Center?

Configuring a redundant link to the Internet gives you improved service and reduces outages and their related costs. This strategy also offers network administrators peace of mind as a bonus. Here is how Border Gateway Protocol (BGP) produces these results for your company.

With the proliferation of VPNs, e-commerce, and a multitude of other crucial Internet applications, access to the Internet has become mission critical for many organizations, and Internet connection redundancy is vital to ensuring the availability of these applications. The decreasing cost of corporate Internet access also makes that redundancy easier to justify.

BGP is one of the key tools for achieving Internet connection redundancy. When you connect your location to two different Internet service providers, it is called multihoming. When you multihome your network to two different ISPs, BGP runs on your Internet router(s) and provides redundancy and network optimization by selecting which ISP offers the best path to a resource.

If you lose the connection to one of your Internet service providers, BGP’s keepalive packets will time out and the BGP neighbor session to that ISP will be marked down. That provider’s routes will be removed from the BGP table and, thus, from your router’s routing table. With only the remaining provider’s routes left in your BGP table, those routes (from the secondary/redundant provider) are marked as “best” and placed in your routing table.

Normally, some paths through one provider will be shorter than the other provider’s paths, and vice versa, so your traffic will be distributed to whichever provider offers the best AS path for a given advertised network. However, if you are sending more traffic to a certain network (through one provider) than your link to that provider can handle, the extra traffic will not spill over onto your second link. Using BGP metrics you can attempt various forms of load distribution, but there is no true BGP load balancing.
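As a simplified illustration of the selection and failover behavior described above, here is a toy Python model. It considers only AS-path length and session state; real BGP evaluates several attributes (weight, local preference, origin, MED, and so on) before AS-path length, and all the AS numbers and prefixes below are invented for the example.

```python
# Toy model of BGP best-path selection across two upstream ISPs (illustrative;
# real BGP compares weight, local preference, MED, etc. before AS-path length).

# Routes learned for one prefix from each provider; AS paths are invented.
routes = {
    "203.0.113.0/24": {
        "ISP-A": [64500, 64510, 64520],  # three AS hops via provider A
        "ISP-B": [64600, 64520],         # two AS hops via provider B
    },
}
neighbor_up = {"ISP-A": True, "ISP-B": True}

def best_path(prefix):
    """Pick the shortest AS path among neighbors whose session is still up."""
    candidates = {
        isp: path for isp, path in routes[prefix].items() if neighbor_up[isp]
    }
    if not candidates:
        return None  # no sessions up: the prefix drops out of the routing table
    return min(candidates.items(), key=lambda item: len(item[1]))

print(best_path("203.0.113.0/24"))  # ISP-B wins: shorter AS path
neighbor_up["ISP-B"] = False        # keepalives time out, session marked down
print(best_path("203.0.113.0/24"))  # failover: ISP-A is now the only candidate
```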

Who needs to understand BGP?

BGP is relevant to network administrators of large organizations that connect to two or more ISPs, as well as to Internet service providers (ISPs) that connect to other network providers. If you are the administrator of a small corporate network, or an end user, you probably don’t need to know about BGP.

BGP basics

  • The current version of BGP is BGP version 4, defined in RFC 4271.
  • BGP is the path-vector protocol that provides routing information for autonomous systems on the Internet via its AS-Path attribute.
  • BGP runs on top of TCP (port 179) rather than directly over IP, as most interior routing protocols do.
  • Peers that have been manually configured to exchange routing information will form a TCP connection and begin speaking BGP. There is no discovery in BGP.
  • Medium-sized businesses usually get into BGP for the purpose of true multi-homing for their entire network.
  • An important aspect of BGP is that the AS-Path itself is an anti-loop mechanism: routers will not import any route whose AS-Path already contains their own AS number (see the sketch below).
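Here is that anti-loop rule as a minimal Python sketch; the AS numbers are illustrative assumptions, not real assignments.

```python
# The AS-Path anti-loop rule in one function (AS numbers are illustrative).

MY_AS = 64512  # our autonomous system number (an assumption for the example)

def accept_route(as_path):
    """Import a route only if our AS does not already appear in its AS-Path."""
    return MY_AS not in as_path

print(accept_route([64600, 64700]))         # True: safe to import
print(accept_route([64600, 64512, 64700]))  # False: would loop back through us
```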
Why do you need to understand BGP?

When BGP is configured incorrectly, it can cause massive availability and security problems, as Google discovered in 2008 when its YouTube service became unreachable to large portions of the Internet. What happened was that, in an effort to ban YouTube in its home country, Pakistan Telecom used BGP to route YouTube’s address block into a black hole. But, in what is believed to have been an accident, this routing information somehow got transmitted to Pakistan Telecom’s Hong Kong ISP and from there got propagated to the rest of the world. The end result was that most of YouTube’s traffic ended up in a black hole in Pakistan.

More sinisterly, 2013 saw a number of BGP hijack attacks, in which modified BGP route information allowed unknown attackers to redirect large blocks of traffic so that it traveled via routers in Belarus or Iceland before being passed on to its intended destination.

Why is Disaster Recovery Important to a Company, and What is the History Behind it?

Recent research supports the idea that implementing a more holistic pre-disaster planning approach is more cost-effective in the long run: every $1 spent on hazard mitigation (such as a disaster recovery plan) saves society $4 in response and recovery costs.

As IT systems have become increasingly critical to the smooth operation of a company, and arguably the economy as a whole, the importance of ensuring the continued operation of those systems, and their rapid recovery, has increased. For example, of companies that had a major loss of business data, 43% never reopen and 29% close within two years. As a result, preparation for continuation or recovery of systems needs to be taken very seriously. This involves a significant investment of time and money with the aim of ensuring minimal losses in the event of a disruptive event.

Disaster recovery developed in the mid- to late 1970s as computer center managers began to recognize the dependence of their organizations on their computer systems. At that time, most systems were batch-oriented mainframes which in many cases could be down for a number of days before significant damage would be done to the organization.

As awareness grew of the potential business disruption that would follow an IT-related disaster, the disaster recovery industry developed to provide backup computer centers. Sun Information Systems (which later became Sungard Availability Services) became the first major US commercial hot-site vendor, established in 1978 in Philadelphia.

During the 1980s and ’90s, customer awareness and the industry both grew rapidly, driven by the advent of open systems and real-time processing, which increased organizations’ dependence on their IT systems. Regulations mandating business continuity and disaster recovery plans for organizations in various sectors of the economy, imposed by the authorities and by business partners, increased demand and led to the availability of commercial disaster recovery services, including mobile data centers delivered to a suitable recovery location by truck.

With the rapid growth of the Internet through the late 1990s and into the 2000s, organizations of all sizes became further dependent on the continuous availability of their IT systems, with some organizations setting objectives of 2, 3, 4, or 5 nines (99.999%) availability for critical systems. This increasing dependence on IT systems, as well as increased awareness from large-scale disasters such as tsunamis, earthquakes, floods, and volcanic eruptions, spawned disaster recovery-related products and services, ranging from high-availability solutions to hot-site facilities. Improved networking meant critical IT services could be served remotely, so on-site recovery became less important.
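To put those availability targets in perspective, here is the standard arithmetic converting “nines” into allowed downtime per year. The figures follow directly from the definition and are not tied to any particular vendor’s SLA.

```python
# Convert "nines" of availability into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for nines in (2, 3, 4, 5):
    availability = 1 - 10 ** -nines            # e.g. 3 nines = 99.9%
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines} nines ({availability:.3%}): {downtime:,.1f} min/year down")
# 5 nines (99.999%) allows only about 5.3 minutes of downtime per year.
```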

The meteoric rise of cloud computing since 2010 continues that trend: nowadays it matters even less where computing services are physically served, as long as the network itself is sufficiently reliable (a separate issue, and less of a concern since modern networks are highly resilient by design). ‘Recovery as a Service’ (RaaS) is one of the security features or benefits of cloud computing promoted by the Cloud Security Alliance.

Disasters can be classified into two broad categories.

The first is natural disasters such as floods, hurricanes, tornadoes or earthquakes. While preventing a natural disaster is very difficult, risk management measures such as avoiding disaster-prone situations and good planning can help.

The second category is man-made disasters, such as hazardous material spills, infrastructure failures, bio-terrorism, and disastrous IT bugs or failed change implementations. In these instances, surveillance, testing, and mitigation planning are invaluable.

Why does a generator "Power Rating" matter to a data center?

There are three (3) types of generator power ratings… Be sure to ask which type the data center uses.

  1. Standby Power Rating: Standby-rated generators are the most commonly used generator sets. Their primary application is to supply emergency power for a limited duration during a power outage. A standby engine should typically be sized for a maximum 80% average load factor and has a 200-hours-per-year maximum run-time (see the sizing sketch after this list).
  2. Continuous Power Rating: A continuous power rating is used in applications with a constant load. Continuous-rated generators are designed for continuous operation and DO NOT have a yearly maximum run-time.
  3. Prime Power Rating: These are the generators used by Fiberpipe! Prime-rated generators can be used in applications where the user does not purchase power from a public utility; utility companies (e.g., Idaho Power) typically use prime generators. Prime power generators are designed for varying loads and continuous operation and DO NOT have a yearly maximum run-time.
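As a back-of-the-envelope illustration of the standby-rating limits above (the 80% load factor and 200 run-hours per year), here is a quick sizing check in Python; the load and rating figures are invented for the example.

```python
# Quick check against typical standby-rating limits (illustrative inputs).

def standby_rating_ok(avg_load_kw, rating_kw, run_hours_per_year):
    """True only if usage fits the 80% load factor / 200 h-per-year limits."""
    return avg_load_kw <= 0.80 * rating_kw and run_hours_per_year <= 200

# A 500 kW standby set carrying a 450 kW average load for 250 h/yr fails both
# limits, so a prime- or continuous-rated generator would be the safer choice.
print(standby_rating_ok(avg_load_kw=450, rating_kw=500, run_hours_per_year=250))
```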