Archive for the Tag DataONTAP
There is a demo on January 14 that will go over the basics of N series and Data ONTAP, and more. Here is the description along with the link to sign up.
WHEN: Thursday, January 14th, 10-11:30am CST.
PRESENTED BY: Gary Sewell
The topics that will be discussed during this N series presentation are:
1. Simplifying Data Management
2. Storage Efficiency
3. Protecting mission critical business applications (Oracle, Exchange, SQL, VMware & SAP) better than our competitors
4. Most importantly, see how we recover these applications in a matter of minutes!
Posted by richswain in Jan 04, 2010, under News
Welcome back from the holiday season! I am sure some of you worked through the holidays, and hopefully most of you had time to spend with friends and family over the past few weeks. I have been doing some of both; IBM doesn't shut down for the holidays, but I was able to spend more time at home with my family just the same.
As we look forward to the first year of this decade, storage admins are facing new issues with conserving storage. The footprint is (and has been for some time) getting larger, we are keeping data longer, and everyone wants more space for less money without a performance penalty. I am sure you are seeing similar issues (and more) in your own data centers, but what are storage vendors doing to help you solve them?
If you take a look at your storage plan today, does it match your business plan or a strategic plan? If your company has a strategic plan in place, how can you use it to gauge the storage plan? You may not have a plan in place today, but that doesn't have to stop you from looking 6 months ahead.
Here are a couple of factors that can help you put a plan together.
1. Where is your data today? Clients are always looking to put the right data on the right system. Look for the benefits of both hardware (cache, processor, HBA, NICs) and software (snapshots, mirroring, data locking) to match a data type to the system. Does your Windows File server really need to run over the Fibre Channel SAN?
2. Is it on the right disk type or speed? Different disk types have different disk I/O characteristics. Look at how your SATA vs. FC vs. SAS drives match the data set. Are you using high-end disk for data that is hardly ever used?
3. Can you save space by de-duping your data? We all have multiple copies of the same files out there on storage. Why keep multiple copies of the same files/blocks if you can de-dupe that data on the storage system itself and free up space for new data? Sales people love to sell new storage and help clients add on to their systems, but you can extend the time between purchases by condensing the data you have already bought.
4. How many systems are you running today? Can you consolidate those into a single platform? There are pros and cons to moving to a single platform, but instead of running three, think about running two. If you have multiple file servers, can you consolidate them on the N series system using its CIFS license, or MultiStore? With it on the storage side, you save money (AV agent licenses, backup licenses, OS licenses) and de-dupe the data so you don't 'need' as much to begin with.
5. What is your backup/restore/retention schedule? Do you really need tape? It is 2010; people in 1980 were asking the same question and have been ever since. Tape still plays a big role in the data center today, but are you relying on it for restores? What's your restore time? How reliable is the restore?
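Point 2 above is easy to sanity-check with a little arithmetic. Here is a minimal sketch that estimates how many spindles a target workload needs per drive type; the per-drive IOPS figures are rough rules of thumb I am assuming for illustration, not vendor specifications, and it ignores RAID penalties and caching.

```python
# Rough spindle-count estimate per drive type. The per-drive IOPS
# numbers are ballpark assumptions, not vendor specs, and RAID
# penalties and cache hits are ignored.
DRIVE_IOPS = {"15K FC/SAS": 180, "10K FC": 120, "7.2K SATA": 75}

def spindles_needed(target_iops, drive_type):
    """Minimum whole drives needed to serve target_iops for the given type."""
    per_drive = DRIVE_IOPS[drive_type]
    return -(-target_iops // per_drive)  # ceiling division

for dtype in DRIVE_IOPS:
    print(dtype, spindles_needed(9000, dtype))
```

Run it against your own measured workload numbers; the point is that matching disk type to data set can change the spindle count (and the cost) several-fold.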
There should be other parts to your plan, but these questions should get you started. Take a different approach and interview other parts of IT like DB admins, developers, even those network guys, as well as different parts of the business: accounting, HR, operations. You may get a better idea of their needs for the near future. Take the time to plan out your storage this year and hopefully it will save you not only time and money but peace of mind as well.
Posted by richswain in Oct 30, 2009, under News
Some of you had asked to know when this was released so here it is. Please, as always take time to read through the release notes and the admin guides to familiarize yourself with the changes. There are new commands and some have been retired.
A couple of key things for this release:
Support for the EXN 3000 disk shelf
Port based load balancing option for multimode VIFs
There is a lot of good information in the release notes that a lot of people miss, and they end up calling support. Everything in the document is linked so you can quickly move around and find things easily.
Posted by richswain in Oct 26, 2009, under Event
Event at the NYC Netapp office. Gain perspective on the business and architectural considerations required to support an end-to-end,
virtualized data center. Learn more about how IBM N series storage uniquely complements vSphere environments. Understand how proven technologies such as deduplication, thin provisioning and snapshots can significantly reduce the amount of storage needed in a virtual environment while enhancing performance and streamlining administration.
Sponsored by IBM, Netapp, VMware and Brocade
The Virtualized Dynamic Data Center
Wednesday, October 28, 2009
3:30 – 5:30 Virtualized Dynamic Data Center Presentation
Breakout Sessions: Cloud Computing, Brocade,
IBM System X
Wine Tasting Reception
100 Park Avenue, 13th Floor, New York, New York
Posted by richswain in Oct 22, 2009, under News
IBM N series Release News 3 of 4
The past two days I have been writing about new technologies IBM released on October 20th. There was plenty of hardware to help IBM customers get more bang for their buck, but there was also an addition to the SnapManager suite: SnapManager for Hyper-V.
One of the biggest changes in the data center has been server virtualization. This new idea of consolidating the server farm from multiple 1U servers to a larger server hosting multiple virtual machines posed new challenges. Backups, restores, IO bottlenecks; all new things we had to work through.
For many years VMware was the main player in the virtual server game. As the virtual movement caught more traction with customers, other vendors started coming out with their own virtual products. Microsoft released its virtual product, codenamed Viridian, with its Windows 2008 OS and named it Hyper-V.
The IBM SnapManager suite allows customers to take consistent backups of databases, and Hyper-V is no different. Built on snapshot technology, SMHV takes snapshots of the VMs. Some of the terms used with Hyper-V are different than VMware's, but the idea is pretty much the same.
SMHV uses groups of objects called datasets. Retention policies can be set on each dataset. A dataset is backed up collectively, but its members can be restored individually.
There is also SnapMirror integration that helps mirror the datasets to another set of N series disks. Reporting is included as well.
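To make the dataset idea concrete, here is a toy model; the class and field names are purely illustrative and are not the actual SMHV interface. It just demonstrates the collective-backup, individual-restore behavior described above.

```python
from dataclasses import dataclass, field

# Toy model of an SMHV-style dataset. Names and fields are
# illustrative only, not the real SMHV API.
@dataclass
class Dataset:
    name: str
    retention_days: int          # retention policy is set per dataset
    vms: list = field(default_factory=list)

    def backup(self):
        # Backed up collectively: one snapshot covers every VM in the set.
        return {"dataset": self.name, "vms": list(self.vms)}

    def restore(self, snapshot, vm):
        # Restored individually: pull a single VM out of the group backup.
        if vm not in snapshot["vms"]:
            raise ValueError(f"{vm} not in backup of {snapshot['dataset']}")
        return vm

ds = Dataset("finance-vms", retention_days=30, vms=["sql01", "exch01"])
snap = ds.backup()
ds.restore(snap, "sql01")
```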
If you have some time to review the SMHV product, there are a few links you should take a look at:
Posted by richswain in Oct 21, 2009, under News
IBM N series Release News 2 of 4
As I noted in yesterday's blog update, IBM has released a wealth of new hardware and software on the N series platform. This release includes the first IBM storage platform to support FCoE and 10Gb Ethernet, a new disk shelf with 30% more capacity than older technologies in a 4U footprint, an additional SnapManager product called SnapManager for Hyper-V, and the next-generation Performance Acceleration Module. The technology behind the N series is geared to help our clients meet the new paradigm of doing more with their storage with higher availability, quicker recovery and lower cost.
Today, I wanted to update you on the new generation of the Performance Acceleration Module, or PAM II for short. This second-generation technology builds on its predecessor, which offered only 16GB of memory. The idea is to increase the amount of read cache by using flash-based technology instead of adding more spindles to the entire system.
Traditionally we have needed to add more disks to the spindle count to increase IOP throughput and lower response time from a disk subsystem. Some have changed the disk type to solid state disk, which allows faster response and higher IO throughput, but only for the data stored on that chip. This can get expensive for larger data sets that still require 'Tier 1' throughput, and it only benefits the data stored on the chips.
A comparison study (link) was done to see the difference between adding more drives and adding PAM II cards. An N6060 system with 84 disks was tested with an OLTP workload, which is typically both random and sequential IO; not only IBM but most other storage vendors describe this as the best way to really test performance. The result: it took 140 more disks to match the throughput of adding 1TB of PAM II cache. The PAM II cards cost 50% less than the purchase of 140 more disks, with further savings in power and cooling and a total of 30U reclaimed in the rack.
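The rack-space number checks out with simple arithmetic. Assuming 14-drive, 3U shelves (EXN4000-style; the shelf figures are my assumption, not stated in the study), 140 avoided drives works out to the 30U above:

```python
# Back-of-the-envelope check of the 30U rack savings. The 14-drive,
# 3U shelf figures are assumptions (EXN4000-style), not from the study.
DRIVES_AVOIDED = 140
DRIVES_PER_SHELF = 14
RU_PER_SHELF = 3

shelves_avoided = DRIVES_AVOIDED // DRIVES_PER_SHELF   # 10 shelves
rack_units_saved = shelves_avoided * RU_PER_SHELF      # 30U
print(shelves_avoided, rack_units_saved)
```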
The PAM II cards use SLC NAND flash memory instead of the older SDRAM and come in 256GB and 512GB sizes depending on the N series system. A total of 2TB of cache can be added to a system to improve performance. They do require a PCI-E slot, so only the N6000 and N7000 product lines can take advantage of this new technology.
If you are looking at increasing the performance of your system and want an alternative to spinning up more disk, PAM II might be your answer. PAM II is designed to help you with read-heavy workloads like OLTP databases, messaging and virtual infrastructure. If you want more information about PAM II, feel free to contact your local IBM Storage rep or an IBM Business Partner.
Posted by richswain in Oct 20, 2009, under News
IBM today announced releases for the N series platform including the first IBM Storage product to offer FCoE support. The N series platform now includes
- Performance Accelerator Module – Second Generation
- SnapManager for Hyper-V 1.0
- Fibre Channel over Ethernet
- 10Gb Ethernet support
- EXN3000 SAS Expansion Disk Shelf
Over the next 4 days, I will be writing about these technologies and how they relate to the IBM Smarter Planet initiative. With these technologies, you will be able to consolidate your data center infrastructure, use less power and cooling, and make your solution more efficient.
With budgets getting tighter and CIOs asking to do more with less, companies are asking IBM to help them achieve this shift in the paradigm. The IT department is being asked not only to provide more services at higher availability, but in some cases to become a revenue center for the company.
The new Performance Accelerator Module II (aka PAM II) is an upgrade to its smaller brother, the PAM I. The new flash memory card can be added to systems (both N6000 and N7000) in 256GB and 512GB sizes. Unlike solid state drives, which only accelerate data stored on the chip, PAM II effectively extends the buffer cache for all data. By adding memory cache to the system, it reduces the total number of IOPS the disk subsystem has to perform across the entire dataset stored on the system.
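The effect of extending the buffer cache is easy to see with hit-rate arithmetic; the workload and hit-rate numbers below are hypothetical, chosen only to illustrate the mechanism.

```python
def disk_read_iops(total_read_iops, cache_hit_rate):
    """Reads that still reach the disk subsystem after the cache
    absorbs its share (hypothetical steady-state model)."""
    return total_read_iops * (1.0 - cache_hit_rate)

# A hypothetical 20,000-IOPS read workload: raising the cache hit
# rate from 60% to 85% roughly cuts disk-bound reads from ~8,000
# to ~3,000 IOPS, without adding a single spindle.
before = disk_read_iops(20_000, 0.60)
after = disk_read_iops(20_000, 0.85)
print(round(before), round(after))
```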
Server virtualization has become a hot topic with many companies looking to consolidate the data center footprint. Now IBM is adding support for MS Hyper-V to the list of SnapManager software packages. SMHV is built on the SnapManager product line that cut its teeth on Exchange and Oracle many years ago. Much like those products, SMHV allows customers to back up and recover their virtual machines within the storage protection policies set by the backup administrator. Clients that use SMHV will find it easier to rapidly provision and clone their virtual machines, and to recover a single virtual machine if needed.
Server virtualization has been a hot topic for a few years, and people have started looking at storage virtualization/consolidation within the last year. Something that was left out was the networking in the data center. The introduction of Fibre Channel over Ethernet (FCoE) and 10Gb Ethernet is allowing companies to save more money by consolidating older 1Gb lines into one or two lines. IBM N series is the first storage system to support FCoE hardware. We are very proud to have this honor and as customers look to IBM to provide an FCoE or a 10Gb product, we have a proven product that will help consolidate their network infrastructure.
The newly announced EXN3000 disk shelf is part of the IBM Smarter Planet goal. With a total capacity of 48TB of raw disk in a 4U footprint, it reduces the number of watts consumed (per TB) by more than 10% per shelf. The EXN3000 can use either SAS or SATA drives in 12- or 24-drive configurations. This equates to about 30% denser storage than the EXN2000 or EXN4000 shelves. With a higher-density storage footprint, less power and cooling is required for fewer spinning disks.
These technologies allow IBM customers to meet the tightening parameters that are changing the paradigm in IT. IBM is constantly building on the proven N series platform to enhance and simplify our customer’s environment while increasing efficiency with the world’s resources. Come back tomorrow for part 2 of 4 in the week long update on the IBM N series release.
A well-written blog post went up today on the Netapp site about best practices with Hyper-V. Chaffie McKenna is an architect for Microsoft Solutions Engineering. The article goes into different parts of setting up the network and the VMs.
If you are looking to deploy Hyper-V in your environment you need to read this article.
I came across this page last night and thought what a neat and simple way to show the value of N series. It reminded me of those commercials from the insurance company. You know, the one with the stack of money and the big rolly eyes?
This is something anyone can run with some basic numbers to get a good idea of what an N series system would save them. I think the best part is the reports at the end, so make sure you click the link for the full report.
Posted by richswain in Sep 28, 2009, under Tips and Tweaks
Now that VMworld '09 is over, there are tons of people talking, blogging and tweeting about everything VM. If you are looking at upgrading to vSphere, or just starting out, there are some things you will want to know how to configure. As with any storage, there are different ways to install and configure, and N series is no different.
I found this great Best Practice document from Netapp that goes through everything from setting up the swap space to multipathing. If you are looking at going to the new VM platform, you need to read this document and take notes.