Saturday, 25 November 2017

Cisco Linksys Router WRT54GS Advanced Settings and Description


LINKSYS Router:-





Wireless Router Advanced Settings Description.


The Wireless screen allows you to customize data transmission settings. In most cases, the advanced settings on this screen should remain at their default values.




Authentication Type


The default is set to Auto, which allows either Open System or Shared Key authentication to be used. For Open System authentication, the sender and the recipient do NOT use a WEP key for authentication. For Shared Key authentication, the sender and recipient use a WEP key for authentication. If you want to use only Shared Key authentication, then select Shared Key.


Transmission Rate


The default setting is Auto. The range is from 1 to 54Mbps.
The rate of data transmission should be set depending on the speed of your wireless network. You can select from a range of transmission speeds, or keep the default setting, Auto, to have the Router automatically use the fastest possible data rate and enable the Auto-Fallback feature. Auto-Fallback will negotiate the best possible connection speed between the Router and a wireless client.
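The Auto-Fallback behaviour can be pictured as stepping down through the supported rates until one holds up. Below is a small conceptual sketch in Python, assuming the standard 802.11g/b rate set; it is an illustration only, not the Router's firmware logic.

```python
# Conceptual sketch of Auto-Fallback (illustration only, not router firmware).
RATES_MBPS = [54, 48, 36, 24, 18, 12, 11, 9, 6, 5.5, 2, 1]  # 802.11g/b rate set

def negotiate_rate(link_is_reliable_at) -> float:
    """Return the fastest rate the current link conditions can sustain."""
    for rate in RATES_MBPS:               # try the fastest rate first
        if link_is_reliable_at(rate):
            return rate
    return RATES_MBPS[-1]                 # fall back to the lowest rate

# Example: a link that is only reliable at 24Mbps or below negotiates 24Mbps.
print(negotiate_rate(lambda rate: rate <= 24))  # 24
```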


Basic Rate


The default value is set to Default. Depending on the wireless mode you have selected, a default set of supported data rates will be selected. The default setting will ensure maximum compatibility with all devices. You may also choose to enable all data rates by selecting ALL. For compatibility with older Wireless-B devices, select 1-2Mbps.


CTS Protection Mode


The default value is set to Disabled. When set to Auto, a protection mechanism will ensure that your Wireless-B devices will connect to the Wireless-G Router when many Wireless-G devices are present. However, the performance of your Wireless-G devices may decrease.


Frame Burst


Frame Burst allows packet bursting, which increases overall network speed.

Beacon Interval


The default value is 100. Enter a value between 1 and 65,535 milliseconds. The Beacon Interval value indicates the frequency interval of the beacon. A beacon is a packet broadcast by the Router to synchronize the wireless network.


RTS Threshold


This value should remain at its default setting of 2347. The range is 0-2347 bytes.

Should you encounter inconsistent data flow, only minor modifications are recommended. If a network packet is smaller than the preset RTS threshold size, the RTS/CTS mechanism will not be enabled. The Router sends Request to Send (RTS) frames to a particular receiving station and negotiates the sending of a data frame. After receiving an RTS, the wireless station responds with a Clear to Send (CTS) frame to acknowledge the right to begin transmission.
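The threshold logic can be summarized in a few lines. The sketch below is purely illustrative (it is not the Router's firmware); it just shows that a frame only triggers the RTS/CTS handshake when its size exceeds the configured threshold.

```python
# Illustrative sketch of the RTS threshold decision (not router firmware).
RTS_THRESHOLD = 2347  # default; valid range is 0-2347 bytes

def needs_rts_cts(frame_size_bytes: int, rts_threshold: int = RTS_THRESHOLD) -> bool:
    """Return True when the frame is large enough to require an RTS/CTS exchange."""
    return frame_size_bytes > rts_threshold

print(needs_rts_cts(1500))        # False: below the default threshold, sent directly
print(needs_rts_cts(1500, 500))   # True: RTS sent first, transmission waits for CTS
```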


Fragmentation Threshold


This value should remain at its default setting of 2346. The range is 256-2346 bytes. It specifies the maximum size for a packet before data is fragmented into multiple packets. If you experience a high packet error rate, you may slightly increase the Fragmentation Threshold. Setting the Fragmentation Threshold too low may result in poor network performance. Only minor modifications of this value are recommended.
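As a simple illustration of what the threshold means (a sketch, not the Router's implementation), a payload larger than the threshold is carried as several smaller fragments:

```python
# Illustrative sketch of fragmentation at a threshold (not the router's code).
FRAG_THRESHOLD = 2346  # default; valid range is 256-2346 bytes

def fragment(packet: bytes, threshold: int = FRAG_THRESHOLD) -> list[bytes]:
    """Split a packet into fragments no larger than the fragmentation threshold."""
    return [packet[i:i + threshold] for i in range(0, len(packet), threshold)]

fragments = fragment(bytes(5000))
print([len(f) for f in fragments])  # [2346, 2346, 308]: three fragments on the air
```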


DTIM Interval


The default value is 1. This value, between 1 and 255, indicates the interval of the Delivery Traffic Indication Message (DTIM). A DTIM field is a countdown field informing clients of the next window for listening to broadcast and multicast messages. When the Router has buffered broadcast or multicast messages for associated clients, it sends the next DTIM with a DTIM Interval value. Its clients hear the beacons and awaken to receive the broadcast and multicast messages.
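To make the countdown idea concrete, here is a toy sketch (not firmware) showing which beacons carry a DTIM for a given interval; power-saving clients wake on those beacons to collect buffered broadcast and multicast traffic.

```python
# Toy sketch: with a DTIM interval of N, every Nth beacon carries the DTIM.
def dtim_beacons(total_beacons: int, dtim_interval: int = 1) -> list[int]:
    """Return the (1-based) beacon numbers that carry a DTIM."""
    return [n for n in range(1, total_beacons + 1) if n % dtim_interval == 0]

print(dtim_beacons(10, dtim_interval=1))  # every beacon carries the DTIM
print(dtim_beacons(10, dtim_interval=3))  # every third beacon: [3, 6, 9]
```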


AP Isolation


Creates a separate virtual network for your wireless network. When this feature is enabled, each of your wireless clients will be on its own virtual network and will not be able to communicate with the others. You may want to utilize this feature if you have many guests that frequent your wireless network.



Wireless MAC Filters


The Wireless MAC Filters feature allows you to control which wireless-equipped PCs may or may not communicate with the Router, depending on their MAC addresses. To disable the Wireless MAC Filters feature, keep the default setting, Disable. To set up a filter, click Enable, and follow these instructions:

1.      If you want to block specific wireless-equipped PCs from communicating with the Router, then keep the default setting, Prevent PCs listed from accessing the wireless network. If you want to allow specific wireless-equipped PCs to communicate with the Router, then click the radio button next to Permit only PCs listed to access the wireless network.
2.      Click the Edit MAC Filter List button. Enter the appropriate MAC addresses into the MAC fields.

Note: For each MAC field, the MAC address should be entered in this format: xxxxxxxxxxxx (the x's represent the actual characters of the MAC address); see the small formatting sketch after these steps.
3.      Click the Save Settings button to save your changes. Click the Cancel Changes button to cancel your unsaved changes. Click the Close button to return to the Advanced Wireless screen without saving changes.
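If you keep MAC addresses in the usual colon- or dash-separated notation, a small helper like the hypothetical one below (not part of the Router's interface) can convert them to the xxxxxxxxxxxx form the MAC fields expect.

```python
# Hypothetical helper for formatting MAC addresses for the filter list.
import re

def to_filter_format(mac: str) -> str:
    """Strip separators such as ':' or '-' and return the 12 hex characters."""
    cleaned = re.sub(r"[^0-9A-Fa-f]", "", mac).upper()
    if len(cleaned) != 12:
        raise ValueError(f"Not a valid MAC address: {mac}")
    return cleaned

print(to_filter_format("00:1A:2B:3C:4D:5E"))  # 001A2B3C4D5E
print(to_filter_format("00-1a-2b-3c-4d-5e"))  # 001A2B3C4D5E
```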


Wireless


Mode

If you have Wireless-G and 802.11b devices in your network, then keep the default setting, Mixed. If you have only Wireless-G devices, select G-Only. If you want to disable wireless networking, select Disable. If you would like to limit your network to only 802.11b devices, then select B-Only.


SSID

The SSID is the network name shared among all devices in a wireless network. The SSID must be identical for all devices in the wireless network. It is case-sensitive and must not exceed 32 alphanumeric characters, which may be any keyboard character. Make sure this setting is the same for all devices in your wireless network. For added security, Linksys recommends that you change the default SSID (Linksys) to a unique name of your choice.


SSID Broadcast

When wireless clients survey the local area for wireless networks to associate with, they will detect the SSID broadcast by the Router. To broadcast the Router's SSID, keep the default setting, Enable. If you do not want to broadcast the Router's SSID, then select Disable.


Channel

Select the appropriate channel from the list provided to correspond with your network settings, between 1 and 11. All devices in your wireless network must use the same channel in order to function correctly.


Check all the values and click Save Settings to save your settings. Click Cancel Changes to cancel your unsaved changes.



Thursday, 23 November 2017



How to Develop a Holistic Cloud Data Management Strategy –


Actifio Briefing Note





Many organizations have identified the public cloud as an important tool in their data protection and data management strategies. Vendors are quick to jump on this interest. The problem is that most of the solutions vendors provide are fairly myopic and use only part of the cloud's capabilities. IT needs to look for a solution that enables a more holistic use of the public cloud.

A Public Cloud Capability Inventory

Most organizations, as they start their cloud journey, look at the cloud as a giant digital dumping ground. They store backups and maybe even archive data to one type of cloud storage at one provider. This dumping ground use case ignores the fact that there are multiple cloud providers, each with different types of storage tiers, and, of course, a vast amount of processing resources available to act on the data they move to the cloud. Also, different cloud providers are developing expertise in certain areas: some are better at video and audio processing, others at machine learning and analytics, and others still are known for airtight security.

To tap into this inventory of capabilities, IT needs to equip the organization with tools that allow not only the movement of data to and between clouds, but also the ability to move data in its native form and, in some cases, even transform that data so it is ready to run in the destination cloud.

Data Protection is Just a Starting Point

There is a reason data protection is the number one task organizations use to begin their cloud journey. Cloud backup, and especially disaster recovery as a service (DRaaS), simplify some of the most difficult-to-manage processes in the data center. Data protection should continue to be the first step, but IT needs to be careful the solution isn't so myopic that, as the organization looks for other cloud use cases, it can't leverage the data protection process to enable them.

DRaaS is certainly a step in the right direction. It expands the cloud backup use case by enabling a customer’s virtual machines (VM) to run in a provider’s cloud in the event of a disaster. In this case, the cloud is used for more than just storage. These solutions leverage cloud compute to instantiate those VMs.

To be more than just data protection, the solution has to provide several key capabilities. First, it has to, obviously, support multiple cloud providers like Amazon, Google, Azure and others. It also has to support private cloud storage (object storage).

Second, the solution needs to move data in its native format, not a proprietary backup format. Storing data in a backup format means time is required to convert the data out of that format before it can be used.

Third, the solution needs to be cloud-tier aware. Each of the major cloud providers has at least three tiers of storage: a high-performance but expensive tier, a more affordable but lower-performing middle tier, and a cold tier that is cost-effective but slow to retrieve data from. IT needs the flexibility to use more than one tier depending on the use case. The solution should enable them to quickly move data between tiers based on that need.

Finally, the solution needs to be multi-cloud aware. The ability to move data between clouds is becoming increasingly important. It enables the organization to leverage specific capabilities of a particular cloud provider or to provide redundancy against cloud failure.

Introducing Actifio 8.0

Actifio is a software-based solution that provides enterprise data-as-a-service. It enables the instant use of data across data centers and multiple clouds. It also enables near real-time protection of data, tracking all those changes, to provide near-instant rollback from cyber-attacks or disasters. Finally, it eliminates the uncontrolled proliferation of copy data by creating what it calls a “Virtual Data Pipeline.”

It works by installing a virtual or physical appliance on-premises; after the initial copy of data is created, new writes are split off as data is sent to primary storage. Essentially, the copy is updated incrementally forever. The Actifio Virtual Data Pipeline then manages this copy of data and provides read/write virtual copies of data to various processes like analytics, test/dev and DR.
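To illustrate the general pattern, here is a toy sketch written under my own assumptions; it is not Actifio's implementation, only a picture of the incremental-forever idea: one continuously updated copy receives every split write, and lightweight virtual copies layer their own changes on top of it.

```python
# Toy sketch of the incremental-forever / virtual-copy pattern (illustrative only).

class ManagedCopy:
    """A single copy of the data, kept current by applying split writes."""
    def __init__(self, initial_blocks):
        self.blocks = dict(initial_blocks)        # full copy made once

    def apply_split_write(self, block_id, data):
        self.blocks[block_id] = data              # updated incrementally, forever

class VirtualCopy:
    """A read/write view for test/dev, analytics or DR; its changes stay private."""
    def __init__(self, source: ManagedCopy):
        self.source = source
        self.overrides = {}

    def read(self, block_id):
        return self.overrides.get(block_id, self.source.blocks[block_id])

    def write(self, block_id, data):
        self.overrides[block_id] = data

managed = ManagedCopy({0: b"base"})
managed.apply_split_write(1, b"new production write")
dev_copy = VirtualCopy(managed)
dev_copy.write(0, b"test change")                  # does not touch the managed copy
print(dev_copy.read(0), managed.blocks[0])         # b'test change' b'base'
```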

In its 8.0 release, Actifio raises the bar. First, it provides native, multi-cloud support. Actifio can copy data to and from on-premises storage to a wide variety of public cloud providers including Amazon AWS, Google Cloud Platform, Microsoft Azure, IBM Bluemix and Oracle Cloud.

More importantly, it places that data in those respective clouds in a native format so the applications running in those clouds can instantly access it. As it does with on-premises data, an Actifio instance in the cloud can manage a single cloud copy and then present virtual images to cloud-based applications.

The 8.0 release also improves Actifio's cloud mobility capabilities. It can now convert a physical server to a VMware VM's VMDK, and a VMDK to an Amazon AMI. At that point, the VM can run in the Amazon cloud. The solution is not limited to Amazon; it can provide this capability for all of its supported clouds.

In the Amazon use case, Actifio can also convert back to VMDK. For example, a VMware VM could be pushed to Amazon AWS, run for a period of time (a seasonal peak, say), then be converted back to a VMDK and moved back on-premises.

With data now potentially spread across several clouds and organization-owned data centers, knowing what data is where is a big challenge. Fortunately, the 8.0 release provides a global catalog, creating a common metadata index of all data regardless of location. Now multi-cloud organizations can find their data no matter where it resides.

Finally, the 8.0 release unveils Actifio's cloud-based customer success offering. This is an always-on customer engagement platform for monitoring, supporting and resolving issues with the customer's Actifio architecture. The solution correlates analytics, not actual data, to provide a community approach to identifying and solving problems.

StorageSwiss Take

Organizations are coming to understand the value of the data they create and collect. As a result, they are storing more data. And that data needs to be operated on by more than the user, application or location that created it. Instead, that data needs to be made available to multiple locations, clouds and other compute services.

Actifio created one market, copy data management, then evolved into a data as a service company based on how enterprises were utilizing the technology. It enables organizations to leverage the data they create and then move that data to the platform or cloud that makes the most sense to operate on that data. 8.0 is a significant step toward unshackling data, while at the same time curtailing its growth so that organizations can extract more value than ever from the data that they store.



Friday, 10 November 2017

Tape Capacities Show No Sign of Slowing Down


LTO Generation:-

The LTO consortium announced the availability of LTO-8, the next generation of LTO tape technology.  LTO is run by three technology provider companies (TPCs), namely HPE, IBM, and Quantum.  Since the first products were released in 2000, LTO media has increased from 100GB capacity with LTO-1 to 12TB on LTO-8.  The future shows a roadmap of cartridges capable of holding half a petabyte of compressed data within another decade.

Figure 1: Timeline of the growth of LTO media from 2000 to 2017





LTO Roadmap


Figure 1 shows the LTO timeline of product releases, with capacities and throughput.  The left axis shows capacity, scaling from 100GB to 12,000GB with LTO-8.  I've drawn the graph with a logarithmic scale because otherwise the early products don't show on the graph at all.  Figures are quoted in GB because TB capacities would produce negative values on the log scale (100GB = 0.1TB, and the log of 0.1 is negative).  The right axis shows throughput in MB/s, from 20MB/s initially to 300MB/s with LTO-8.  Again, this scale is logarithmic.
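For readers who want to recreate the shape of the chart, the sketch below uses matplotlib with a logarithmic capacity axis. Only the 100GB (LTO-1) and 12,000GB (LTO-8) figures come from the text; the intermediate native capacities are approximate values I have filled in and should be checked against the official roadmap.

```python
# Sketch of an LTO capacity chart on a log scale (intermediate values approximate).
import matplotlib.pyplot as plt

generations = ["LTO-1", "LTO-2", "LTO-3", "LTO-4", "LTO-5", "LTO-6", "LTO-7", "LTO-8"]
capacity_gb = [100, 200, 400, 800, 1500, 2500, 6000, 12000]  # approximate native GB

fig, ax = plt.subplots()
ax.plot(generations, capacity_gb, marker="o")
ax.set_yscale("log")                    # log scale keeps the early generations visible
ax.set_ylabel("Native capacity (GB)")   # quoting GB keeps every value >= 1 on the log axis
ax.set_title("LTO native capacity by generation")
plt.show()
```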

We can see straight-line growth improvements in capacity from the technology, to the point where you can almost draw a line with a ruler across the data points.  Throughput has been more challenging, with modest improvements until the jump at LTO-6.  From LTO-9 onwards (where the figures are projections rather than actuals), there are bigger jumps in performance, with two straight-line leaps to around 1100MB/s.

LTO Future

The increases in capacity continue from LTO-9 onwards, with a commitment to LTO-11 and LTO-12 generations that weren't on the previous roadmap.  LTO-12 will have a raw capacity of 192TB (480TB with 2.5:1 compression) and throughput of 1100MB/s.  The idea of being able to store half a petabyte of data on a single cartridge seems hardly imaginable compared with where the LTO project originally started.
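The compressed figure follows directly from the assumed 2.5:1 ratio:

```python
# 192TB raw at the assumed 2.5:1 compression ratio gives the quoted 480TB.
raw_tb = 192
compression_ratio = 2.5
print(raw_tb * compression_ratio)  # 480.0
```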

One of the interesting aspects of LTO and tape continuing to develop at such a rate is the way in which hard drive technology gets incorporated into tape over time.  LTO-8 drives, for example, use TMR (tunnel magnetoresistive) heads rather than GMR.  TMR was originally introduced into disk drives around 2004.  So tape (not just LTO) benefits from the development work done in the hard drive industry.

LTO-8-M

As a small bonus, the new LTO-8 drives will accept new (unused) LTO-7 cartridges and provide 50% additional capacity over an LTO-7 drive.  This capability is being called LTO-8 Type M and is aimed at easing the transition from LTO-7 to LTO-8 for customers who have already invested in LTO-7 media.  So LTO-7 media (typically 6TB) will store 9TB when used in LTO-8 drives.

Changing Role of Tape

When LTO was first introduced, tape was a mainstay of the backup world.  In the 1990s we saw huge tape silos from vendors like StorageTek that used tape for both backup and archive.  In most cases, archive wasn't really a thing, but just a collection of historical backups from which data was restored.  The industry has moved on, with dedicated backup appliances now replacing the first generation of disk-based backup storage.  It's much more practical to use disk for backup, so tape is being positioned more as an archive technology.

On pure media costs alone, tape is way cheaper than disk and a fraction of the cost of using online cloud services like S3.  Obviously, TCO includes drives, libraries, software and people as well as media, so we can't just look at the cost of a single tape.  However, in a well-structured archive, the cost of the media becomes the incremental cost of managing more capacity.  So as an archive scales, the $/GB cost continues to fall.
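A rough sketch of that scaling effect, using made-up placeholder figures rather than real pricing, shows how $/GB falls as fixed costs are spread over a larger archive:

```python
# Illustrative only: all figures below are placeholders, not real pricing.
FIXED_COSTS = 50_000.0        # drives, library, software, people (placeholder)
MEDIA_COST_PER_TB = 10.0      # placeholder media cost

def cost_per_gb(archive_tb: float) -> float:
    total = FIXED_COSTS + archive_tb * MEDIA_COST_PER_TB
    return total / (archive_tb * 1000)     # TB -> GB

for size_tb in (100, 1_000, 10_000):
    print(f"{size_tb:>6} TB archive: ${cost_per_gb(size_tb):.4f}/GB")
# Larger archives spread the fixed costs, so the $/GB figure keeps falling.
```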

Gateway to Tape

Tape is, of course, purely the medium for storing data.  We need a way to get data on and off tapes, and that's where we see a challenge.  In a recent Storage Unpacked podcast, Martin and I talked about some of the challenges, like using LTFS for format independence.  We also discussed Black Pearl from Spectra Logic.  The Black Pearl appliance is effectively a cache in front of one or more Spectra tape libraries.  It translates S3-based API calls into operations that store data onto tape media.
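From a client's point of view, writing through an S3-compatible tape gateway looks like an ordinary S3 PUT. The sketch below uses boto3 with a hypothetical endpoint, bucket and credentials; a real Black Pearl deployment may also use Spectra's own SDK rather than plain S3 calls.

```python
# Hypothetical example: an ordinary S3 PUT aimed at an S3-compatible gateway.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://tape-gateway.example.local",  # hypothetical gateway address
    aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
    aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
)

# The gateway accepts the PUT and handles staging the object onto tape media.
s3.put_object(Bucket="archive-bucket", Key="2017/11/report.pdf", Body=b"archive data")
```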

S3 and object storage in general are seen as a great way to archive content; however, using disk at large scale (or even S3) may not be cost-effective.  Large amounts of data in an archive can be inactive, making the cost of storing it on disk an expensive one.  AWS itself is probably using tape for Glacier, because the access times for content are so long.  This is, of course, reflected in the cost of the service.

I'm not sure that the object storage vendors have fully embraced tape yet.  To fully scale, object stores will need to support tape, and in a way that makes it flexible and easy to use.

Shiny fun stuff tends to get the news in storage (and probably all of IT).  However, storage has always had a cost/performance/capacity balancing act to achieve.  Data tiering has been around forever, and there's no reason not to include tape.  While backup may be better served by disk, long-term retention of data suits tape well.  This could be for compliance or as part of an active archive.