Posted by richswain in Feb 23, 2010, under News
The old adage of faster, smaller, cheaper has been revived in the N series product line. This week IBM officially released the information around the highly anticipated OEM rebrand of NetApp’s FAS2040: the N3400. This system has a small 2U form factor but delivers higher performance than its beefier brother, the N3600. If you want to see a full comparison of the three boxes, click here for more information.
IBM has three systems that round out the entry-level, or departmental, storage platform: the N3600, the N3300 and now the N3400. All three are based on internal drives with expansion to a few shelves as needed. The N3600 comes with 20 internal drives; the smaller N3300 and N3400 come with only 12 internal disks and can expand to a maximum capacity of 136TB. Two controllers allow administrators to build a high availability solution at low cost. This makes the system all the more attractive, as it also supports FCP, iSCSI, CIFS and NFS from one platform.
The N3400 does have a few things I want to point out:
- 8GB of RAM (2x the amount in the N3600 and 4x the amount of the N3300)
- 512 MB of NVRAM
- 2 integrated SAS ports and 8 total 1 Gbps Ethernet ports
- PCI-e port for expansion
All of these help set this box up for an important role within your datacenter. If you compare this system with other storage systems on the market, you find the new N3400 is well stacked and can compete even with larger mid-tier systems. This box is ideal for our SMB clients who really need an all-in-one system with the horsepower to keep up with a growing company. The system is a long way from the first entry-level system IBM decided to roll out, the N3700. If the two were to be compared, the N3700 would be a ‘Happy Meal’ and the N3400 would be a super-sized 2lb Angus burger with fries and a shake, maybe even an apple pie.
This new system is considered ideal for both Windows consolidation and virtual environments alike. With the additional ports the system also gains a longer life span, as the new EXN 3000 SAS shelves are becoming the standard for the N series product line. The system, on the other hand, does not support 10 Gbps cards or FCoE as the N3600 does. But since all N series systems run the same Data ONTAP code, this robust system uses the same commands and interface and is built on the same technology as the larger N6000 and N7000 lines.
Overall, this is an enhanced refresh of the existing N3300 with more ability to scale with current technologies. Its performance will exceed the N3600’s, which raises the question of whether the N3300/N3600 systems are still needed. I suspect that as Data ONTAP 8 becomes generally available from NetApp, there will be more entry-level storage devices released.
For more information on the N3400 and all other N series related information, follow this link or contact your local IBM Storage Rep.
The N3400 series systems have been designed to provide double the performance and twice the capacity of their predecessor, at an affordable price.
The IBM System Storage N3400 series is the newest member of the N series entry-level family of products. This efficient, high-performing, easy-to-manage N series system can help customers consolidate their storage in remote and branch offices.
Following are the main advantages of the new N3400 series systems:
- New 600GB FC and SAS drives (available for all N series products)
- Advanced performance for small or midsize enterprise deployments
- Increased maximum capacity
- Simple to upgrade, deploy and manage without the need for extensive resources
Do you have older storage on your floor taking up space that is too costly to replace? Why not use an N series Gateway to add value to that asset and ‘recycle’ the storage for other uses? We are all looking for ways to do more with fewer resources while the demand on IT keeps getting larger.
N series storage can use older disk subsystems while at the same time using its own native disks. It allows clients to use technology like deduplication, snapshots and thin provisioning without having to rip and replace their existing footprint. The Gateway uses the underlying storage system’s RAID for protection, so there is no bloat from additional RAID on the gateway itself.
If you have multiple storage units, no problem. We can connect via Fibre Channel to multiple storage platforms. This allows us to create tiers of storage for your applications, so that slower systems are used for archives and faster systems are used for production data. Once the gateway is configured, the N series will present data just as if the disks were native.
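As a rough sketch of how this looks from the Data ONTAP 7-mode command line on a gateway (the back-end LUN and aggregate names here are hypothetical, and exact syntax can vary by release):

```shell
# Show array LUNs the back-end storage systems are presenting to the gateway
disk show -n

# Assign two of those LUNs to this controller (names are illustrative)
disk assign backend1:L0 backend2:L0

# Build separate aggregates on fast and slow back-ends to create tiers
aggr create aggr_prod -d backend1:L0      # fast system: production data
aggr create aggr_archive -d backend2:L0   # slower system: archive data
```

From there, volumes carved out of each aggregate behave exactly like volumes on native disks.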
We also add value for systems that don’t support certain technologies. For example, say you are interested in moving to 10 Gbps networking, and N series is the only platform that supports it at this time. You could take the storage system you have now, put a gateway in front of it with a 10 Gbps card, and voilà. It is that easy.
And you don’t have to use an older storage appliance behind a gateway; we are using gateways in front of XIV, SVC and DS5000/DS8000 systems. If you are looking for just Fibre Channel attachment this might not be a good fit, but if you are looking for multiprotocol support and application integration with Exchange/SQL/Oracle/VMware and the like, we can use the underlying storage system and make it more productive with the N series portfolio.
Other uses for a Gateway include disaster recovery, archiving, VDI projects, Dev and Test environments and much more. We see Gateways being used for data migrations, for site to site mirroring and recovery of data centers.
For more information about N series Gateways, check out the IBM site for N series
(formerly known as the IBM System Storage Symposium).
If you sign up early you get access to both the Storage University and the System x and BladeCenter University. They are at the same time, and you can walk from one to the other as needed.
There is a ton of information about IBM Storage at this event. If you have never been before, you should at least take a look at the amount of material that will be covered.
If you are interested in presenting or would like to know more, feel free to reach out to me and I can put you in contact with our staff.
There is a demo on January 14 that will go over the basics of N series and Data ONTAP, and more. Here is the description along with the link to sign up.
WHEN: Thursday, January 14th, 10-11:30am CST.
PRESENTED BY: Gary Sewell
The topics that will be discussed during this N series presentation are:
1. Simplifying Data Management
2. Storage Efficiency
3. Protecting mission critical business applications (Oracle, Exchange, SQL, VMware & SAP) better than our competitors
4. Most importantly, see how we recover these applications in a matter of minutes!
Posted by richswain in Jan 04, 2010, under News
Welcome back from the holiday season! I am sure some of you worked through the holidays, and hopefully most of you had time to spend with friends and family these past few weeks. I have been doing some of both; IBM doesn’t shut down for the holidays, but I was able to spend more time at home with my family just the same.
As we look forward to the first year of this decade, storage admins are facing new issues with conserving storage. The footprint is (and has been for some time) getting larger, we are keeping data longer, and everyone wants more space for less money without a performance penalty. I am sure you are seeing similar issues and more in your own data centers, but what are storage vendors doing to help you solve them?
If you take a look at your storage plan today, does it match your business plan or a strategic plan? If your company has a strategic plan in place, how does your storage plan measure up against it? You may not have a plan in place today, but that doesn’t have to stop you from looking 6 months ahead.
Here are a few factors that can help you put a plan together.
1. Where is your data today? Clients are always looking to put the right data on the right system. Look for the benefits of both hardware (cache, processor, HBA, NICs) and software (snapshots, mirroring, data locking) to match a data type to the system. Does your Windows File server really need to run over the Fibre Channel SAN?
2. Is it on the right disk type or speed? Different disk types have different disk I/O measurements. Look at how your SATA vs FC vs SAS drives match the data set. Are you using high end disk for data that is hardly being used?
3. Can you save space by de-duping your data? We all have multiple copies of the same files out there on storage. Why keep multiple copies of the same files/blocks when you can de-dupe that data on the storage system itself and free up storage for new data? Salespeople love to sell new storage and help clients add on to their systems, but you can extend the time to purchase by condensing the data you have already bought.
4. How many systems are you running today? Can you consolidate those into a single platform? There are pros and cons of moving to a single platform, but instead of running three, think about running two. If you have multiple file servers, can you consolidate them onto the N series system using its CIFS license, or MultiStore? With the file serving on the storage side, you save money (AV agent licenses, backup licenses, OS licenses) and can de-dupe the data so you don’t ‘need’ as much to begin with.
5. What is your backup/restore/retention schedule? Do you really need tape? It is 2010; people in 1980 were asking the same question and have been ever since. Tape still plays a big role in the data center today, but are you relying on it for restores? What’s your restore time? How reliable is the restore?
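To put some action behind item 3, here is a minimal sketch of enabling deduplication on an N series volume from the Data ONTAP 7-mode command line (the volume name is hypothetical):

```shell
sis on /vol/fileshares        # enable deduplication on the volume
sis start -s /vol/fileshares  # -s scans existing data, not just new writes
sis status /vol/fileshares    # check progress of the dedupe scan
df -s /vol/fileshares         # report how much space was saved
```

Running `df -s` before and after is a quick way to show management real savings from data you have already purchased.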
There should be other parts to your plan, but these questions should get you started. Take a different approach and interview other parts of IT, like DB admins, developers, even those network guys, as well as different parts of the business: accountants, HR, Operations. You may get a better idea of their needs for the near future. Take the time to plan out your storage this year and hopefully it will save you not only time and money but peace of mind as well.
Posted by richswain in Nov 18, 2009, under News
There are a few new training classes that IBM and our training partner Fast Lane are offering.
If you are looking for simple, self paced online training check here
For more traditional classes on N series:
Also, our training partner Fast Lane now has N series training. Here is an excerpt from their site:
“Fast Lane is now an IBM N series Authorized training partner for the IBM System Storage N series product line. Fast Lane is now offering classroom courses for the entire IBM N series Unified Storage Solutions product line for IBMers, Business Partners and Customers.
In addition to our publicly available courses, Fast Lane can bring these standard courses or customized versions of these courses to your location at your request.”
If you are looking for training these are a good start, and remember you can always take classes at NetApp University.
Posted by richswain in Oct 30, 2009, under News
Some of you had asked to know when this was released so here it is. Please, as always take time to read through the release notes and the admin guides to familiarize yourself with the changes. There are new commands and some have been retired.
A couple of key things for this release:
Support for the EXN 3000 disk shelf
Port-based load balancing option for multimode VIFs
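As a quick sketch, the new port-based option is selected with the -b flag on vif create in 7-mode (the interface names and address below are illustrative):

```shell
# Create a multimode vif that balances traffic by source/destination port
vif create multi -b port vif1 e0a e0b e0c e0d
ifconfig vif1 192.168.10.20 netmask 255.255.255.0
vif status vif1   # confirm the vif is up and shows the port balancing policy
```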
There is a lot of good information in the release notes that many people miss, and they end up calling support. Everything in the document is linked so you can quickly move around and find things easily.
Posted by richswain in Oct 23, 2009, under News
IBM N series Release News 4 of 4
TGIF everyone! This is the final installment of the IBM N series release news. I have updated you on the new-generation PAM II cards and SnapManager for Hyper-V, and today we will discuss 10 Gb Ethernet/FCoE.
Consolidation is a great topic you will find lots of bloggers talking about, and we mainly talk about either server consolidation with virtualization or storage consolidation with deduplication or cloning technologies. But now we have the technology to consolidate your network topology into larger pipes using 10 Gb Ethernet.
The idea of moving from a slower network speed like 10 Mb to 100 Mb, or 100 Mb to 1 Gb, has always been based on faster performance and building a bigger pipe to applications. Now that we can push 10 Gb, there is a different idea behind the movement.
The movement comes on the heels of virtualization and the ‘doing more with less’ mentality. No longer are we looking just for faster speed; we are also looking at the cost we can save by consolidating all of the 1 Gb links down into a single 10 Gb link.
I still think most people will recommend two 10 Gb links to help with high availability, but why not use one 10 Gb port as the primary and then a few 1 Gb links teamed (vif’ed together) as a backup or failover? This allows you to get the speed and performance without the hit of having to use two 10 Gb ports on your switch.
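In Data ONTAP 7-mode terms, that design is a single-mode (active/standby) vif whose members are the 10 Gb port and a multimode vif of the teamed 1 Gb ports. A rough sketch, with illustrative interface names:

```shell
# Team four 1 Gb ports into a multimode vif to serve as the standby leg
vif create multi vif_1g e0a e0b e0c e0d

# Single-mode vif over the 10 Gb port and the 1 Gb team; one member active at a time
vif create single vif_data e1a vif_1g

# Prefer the 10 Gb port whenever it is healthy
vif favor e1a

ifconfig vif_data 10.0.0.50 netmask 255.255.255.0
```

If the 10 Gb link fails, the vif fails over to the 1 Gb team; `vif favor` pulls traffic back to the 10 Gb port when it returns.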
Just like 1 Gb switches when they arrived on the data center scene, 10 Gb switches are still expensive. But if you look at history, the cost per port will start to come down after the ‘newness’ wears off and more and more people start adopting it as their standard.
IBM N series now supports a 10 Gb Ethernet card with dual ports. This allows you to connect to two different switches or create a 20 Gb vif. The setup of the 10 Gb ports is the same as the 1 Gb Ethernet ports you have been using for years.
IBM N series is the first storage product (and the only one as of now) to support FCoE. The FCoE cards come in two different flavors, fibre or copper ports. They are both set up as target cards and use a PCIe slot.
Posted by richswain in Oct 22, 2009, under News
IBM N series Release News 3 of 4
The past two days I have been writing about new technologies IBM released on October 20th. There was plenty of hardware to help IBM customers get more bang for the buck, but there was also an addition to the SnapManager suite: SnapManager for Hyper-V.
One of the biggest changes in the data center has been server virtualization. This new idea of consolidating the server farm from multiple 1U servers to a larger server hosting multiple virtual machines posed new challenges. Backups, restores, I/O bottlenecks; all new things we had to work through.
For many years VMware was the main player in the virtual server game. As the virtual movement caught more traction with customers, other vendors started coming out with their own virtual products. Microsoft released its virtualization product, codenamed Viridian, with its Windows 2008 OS and named it Hyper-V.
The IBM SnapManager suite allows customers to take consistent backups of databases, and Hyper-V is no different. Built on snapshot technology, SMHV takes snapshots of the VMs. Some of the terms used with Hyper-V are different than with VMware, but the idea is pretty much the same.
SMHV organizes VMs into groups of objects called datasets. Each dataset can have its own retention policies applied. The dataset is backed up collectively, but its VMs can be restored individually.
There is also SnapMirror integration that helps mirror the datasets to another set of N series disks. Reporting is also an included feature.
If you have some time to review the SMHV product, there are a few links you should take a look at: