Novell Filr Technology Overview – Part 2

Part 1 of the Novell Filr Technology Overview was getting too heavy, so I had to break it up; this part covers the storage features.

How does the storage space look to the different access methods and mobile devices? Novell Filr does not deviate from the comfortable interface that is functionally similar to applications such as Dropbox. Presented as folders and files, the interface is a familiar one. It is called “MY FILES”.

But under the wraps of “MY FILES”, Novell Filr consolidates both Personal Storage and Net Folders locations under one roof. Here’s a look at “MY FILES” and how it consolidates the various underlying file storage structures:
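To make the idea concrete, here is a minimal Python sketch, entirely my own illustration and not Filr code, of how a single “MY FILES” view could merge files living in Personal Storage with files living in Net Folders:

```python
# A conceptual sketch (my own illustration, not Filr code) of how one
# "MY FILES" view could consolidate two different backing stores.

from dataclasses import dataclass

@dataclass
class FileEntry:
    name: str
    backend: str   # where the file actually lives

def my_files_view(personal_storage, net_folders):
    """Merge files from both backends into one familiar folder listing."""
    view = []
    for name in personal_storage:
        view.append(FileEntry(name, backend="Personal Storage (Filr appliance)"))
    for name in net_folders:
        view.append(FileEntry(name, backend="Net Folder (existing file server)"))
    return sorted(view, key=lambda e: e.name)

# The user sees one flat "MY FILES" listing; the backend column stays invisible.
for entry in my_files_view(["ideas.txt"], ["budget.xlsx", "report.docx"]):
    print(entry.name, "->", entry.backend)
```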

Continue reading

Novell Filr Technology Overview – Part 1

Today, I am like a kid opening presents on Christmas morning.

Reading and understanding the Novell Filr architecture is exciting, with each feature revealing something different; some may not be entirely unique, but they are done in a simplified way. Novell Filr has simplified a few things that storage guys like me appreciate very much. Let me share this technology learning session with you.

2 Key Features

First of all, I see the Novell Filr as a Secure Access Broker.

The Novell Filr provides file access, file sharing and file synchronization with multiple mobile devices. The mobility revolution in the likes of smart phones, tablets and other “connected” devices in our personal lives is changing our habits in the way we want information to be accessed, which I can summarize in 2 words – SIMPLE, UNINHIBITED. It is the lack of inhibition that scares the hell out of IT, because IT is losing control and corporations fear data leaks.

Novell Filr lets users access their home directories and network folders from their mobile devices. It lets users synchronize their files with Windows and MacOS computers, regardless of whether these devices are inside the company’s firewalled network or outside of it. Here’s a simple diagram of how Novell Filr defines its position as a Secure Access Broker.
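Conceptually, the broker boils down to an identity-and-policy check that sits between every device and the file stores. Here is a minimal Python sketch, assuming made-up users and shares; this is my own model, not Filr’s actual logic:

```python
# A minimal sketch (my own model, not Filr's actual logic) of a secure
# access broker: every request is checked against identity and policy
# before any file is served or synchronized, whether the device sits
# inside or outside the firewall.

ALLOWED = {
    ("alice", "/home/alice"): {"read", "write", "sync"},
    ("alice", "/net/projects"): {"read"},
}

def broker_request(user, path, action, device_location):
    # The broker, not the device's network location, decides access;
    # external devices simply reach the broker through a DMZ/proxy.
    rights = ALLOWED.get((user, path), set())
    decision = "ALLOW" if action in rights else "DENY"
    print(f"{decision}: {user}@{device_location} wants {action} on {path}")
    return decision == "ALLOW"

broker_request("alice", "/home/alice", "sync", "external/4G")
broker_request("alice", "/net/projects", "write", "internal/LAN")
```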

Continue reading

The openness of Novell Filr technology

In the previous blog entry, I spoke about finally getting the opportunity to look deeper into Novell Filr technology. As I continue my journey of exploration, I am already consolidating information about the other EFSS (Enterprise File Synchronization and Sharing) solutions out there.

Many corporate IT users are moving away from pedantic corporate IT control toward seemingly easy-to-synchronize, easy-to-share, cloud-based services such as Dropbox and Box.net. This practice exposes a big hole in the corporate network, leaking data and files, and yet most corporate IT users are completely ignorant of how irresponsible such an act is.

Corporate IT users cannot blame IT for being a big A-hole about keeping tight control of the network and security. It is IT’s job to safeguard the company’s data and files for security, compliance and privacy reasons.

In the past 9-12 months, IT has certainly relaxed (probably “relented” is a better word) its uptight demeanour, because it knows it cannot stop the onslaught of BYOD (bring your own device). The C-level and senior management have practically demanded it, forcing their way in with their own smart devices and tablets to increase their productivity (yeah, right!).

To alleviate data security concerns, MDM (Mobile Device Management) solutions are now hot items on the IT shopping list. Since we are talking about Novell, I also got to know that Novell has an MDM solution called ZENworks Mobile Management. ZENworks builds on Novell’s proven track record in user and identity management, and integrates with LDAP authentication systems such as Active Directory and eDirectory.
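For the curious, the LDAP integration piece is straightforward plumbing. Here is a hedged Python sketch of a simple-bind authentication check using the ldap3 library; the host name and DNs are made up for illustration:

```python
# A hedged sketch of LDAP authentication against eDirectory or Active
# Directory, the kind of backend ZENworks and Filr plug into. Uses the
# python ldap3 library; the host and DNs below are hypothetical.

from ldap3 import Server, Connection, ALL

def authenticate(user_dn, password):
    server = Server("ldap.example.com", get_info=ALL)  # hypothetical LDAP host
    try:
        # A successful simple bind proves the credentials are valid.
        conn = Connection(server, user=user_dn, password=password, auto_bind=True)
        conn.unbind()
        return True
    except Exception:
        return False

print(authenticate("cn=alice,ou=users,o=acme", "s3cret"))
```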

The collision of the BYOD phenomenon and the need to share corporate data and files securely has conceived the Enterprise File Synchronization and Sharing market.

Continue reading

Novell Filr about to be revealed

My training engagement landed me in Manila this week. At the back of my mind is Novell Filr, first revealed to me a week ago by my buddy at Novell Malaysia. Almost 18 months after I first wrote about it, Novell Filr is about to be revealed on my blog this month. And it has come at an opportune time, because the enterprise BYOD/file synchronization market is about to take off.

Gartner defines this market as Enterprise File Synchronization and Sharing (EFSS) and it is already a very crowded market given the popularity of Dropbox, Box.net, Sugarsync and many, many others. It is definitely a market that is coveted by many but mastered by a few. There are just too many pretenders and too few real players.

The proliferation of smart phones, tablets and other mobile devices has opened up a burgeoning need to have data everywhere. The wonder of having personal data right at our fingertips whenever we want it gives rise to wanting business and corporate data to be available as well. The power of having data instantly at the swipe of a finger on the touchscreen is akin to feeling like God, giving life to our communications and making opportunities come alive at that very moment.

Continue reading

Washing too much software defined

There was practically a firestorm when EMC announced ViPR, its own version of “software-defined storage”, at EMC World last week. Whether you want to call it Virtualization Platform Re-defined or Re-imagined, competitors such as NetApp, HDS and Nexenta have taken pot-shots at EMC while touting their own versions of software-defined storage.

In the release announcement, EMC claimed the following (a cut-&-paste from the announcement):

  • The EMC ViPR Software-Defined Storage Platform uniquely provides the ability to both manage storage infrastructure (Control Plane) and the data residing within that infrastructure (Data Plane).
  • The EMC ViPR Controller leverages existing storage infrastructures for traditional workloads, but provisions new ViPR Object Data Services (with access via Amazon S3 or HDFS APIs) for next-generation workloads. ViPR Object Data Services integrate with OpenStack via Swift and can be run against enterprise or commodity storage.
  • EMC ViPR integrates tightly with VMware’s Software Defined Data Center through industry standard APIs and interoperates with Microsoft and OpenStack.
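The second bullet is worth dwelling on: if ViPR Object Data Services speak the Amazon S3 API, then in principle any standard S3 client should work against them. Here is a hedged sketch using the boto3 Python library; the endpoint URL and credentials are hypothetical placeholders, not actual ViPR details:

```python
# Hedged illustration: talking to an S3-compatible object service the way
# the announcement describes. Endpoint and credentials are made up.

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://vipr.example.com:9021",  # hypothetical object endpoint
    aws_access_key_id="TENANT_USER",
    aws_secret_access_key="TENANT_SECRET",
)

# Ordinary S3 calls, brokered onto enterprise or commodity storage.
s3.create_bucket(Bucket="next-gen-workload")
s3.put_object(Bucket="next-gen-workload", Key="sensor/0001.json", Body=b'{"t": 21.5}')
print(s3.get_object(Bucket="next-gen-workload", Key="sensor/0001.json")["Body"].read())
```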

The separation of the Control Plane and the Data Plane in ViPR allows the abstraction of 2 main layers.

Layer 1 is the abstraction of the underlying storage hardware infrastructure. Although I don’t have the full details (EMC guys, please enlighten me!), I believe storage administrators no longer need to carve out LUNs from RAID groups or storage pools, stripe and slice them, and further provision them into meta filesystems before they are exported or shared through NAS protocols. I am, of course, referring to the underlying provisioning architecture of Celerra, which can be quite complex. Anyone who has done manual provisioning with Celerra Manager should know what I mean.
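To appreciate the complexity being abstracted away, here is that manual flow modeled as plain Python steps; this is my own illustration of the workflow only, not actual Celerra commands:

```python
# The manual, multi-step provisioning workflow described above, modeled
# as plain Python (an illustration of the flow, not Celerra commands).

def carve_luns(raid_group, count):
    return [f"{raid_group}-LUN{i}" for i in range(count)]   # 1. carve LUNs

def stripe_and_slice(luns):
    return f"stripe({'+'.join(luns)})"                      # 2. stripe/slice a volume

def build_meta_filesystem(volume):
    return f"meta_fs[{volume}]"                             # 3. meta filesystem on top

def export_nas(fs, protocol):
    return f"{protocol} export of {fs}"                     # 4. finally export via NAS

# Every share needs all four manual steps; ViPR's pitch is to hide them
# behind the Control Plane.
print(export_nas(build_meta_filesystem(stripe_and_slice(carve_luns("RG0", 4))), "NFS"))
```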

Here’s the provisioning architecture of Celerra:

Continue reading

Storage Facebook likes

There is a mini revolution going on, and Facebook is the main force driving it.

It is the Open Compute Project (OCP), and its mission is to redesign modern-day data centers and drive open hardware and architectural designs and specifications, including storage. The overall goals are to drive greater data center efficiency, flexibility, energy savings and cost effectiveness in a new class of “hyperscale” datacenters. Facebook, Google and Amazon are examples of hyperscale datacenter operators, whose businesses rely on massive computing power, exponential storage performance and racks and racks of computing infrastructure to drive their web-computing or cloud-computing services.

Some of the cool technology innovations in mind include systems that support CPUs from any vendor, including Intel and AMD. We may even see both processor brands running on the same motherboard. The Open Common Slots component for processors is based on PCIe. Intel has pledged its Decathlete motherboard specification to OCP, and likewise AMD has contributed its Roadrunner motherboard specification to the project. The ARM processor could also be supported in the near future under this “mix-and-match” OCP ideal.

Other proposed changes include the OpenRack specifications, “sleds”, and of course, the Open Vault storage project (aka “Knox”).

Continue reading

And Cloud Storage will make us even stranger

It was a dark and stormy night ….

I was in a car with my host in the stifling traffic jams on the streets of Jakarta. We had just finished dinner, and his driver was taking me back to the hotel. It was about 9pm, and we were making conversation, trying to figure out how we could work together. My host, a wonderful Singaporean who has been residing in Jakarta for more than a decade and a half, owns a distributorship focusing mainly on IT security solutions. He had invited me over to Jakarta to give a talk on Cloud Storage at the Indonesia CIO Network event on January 9th 2013.

I was there to represent SNIA South Asia and give a talk about CDMI (Cloud Data Management Interface), and my host also took the opportunity to introduce Nutanix, a SAN-less, 2-tier, high-performance, virtualized data center platform. (Note: That’s quite a mouthful, but gotta include all the buzz-words in there.) It was my host’s first foray into storage networking solutions, away from his usual security solutions spread. As the conversation went on in the car, he said, “You storage guys are so strange!”

To many of the IT folks who have been involved in OS, applications, security and networking, to name a few, storage is like a dark art, some mumbo-jumbo, voodoo-like science known to a select few. That’s great, because this perception keeps us relevant, and we still have value and a job. To me, that’s just fine and dandy, and I like it that way. 🙂

In preparation for the event, I had to read up on SNIA CDMI. Cloud and Storage … Cloud and Storage … Cloud and Storage. Hmmm ….
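What struck me early on is that CDMI is really just RESTful HTTP with CDMI-specific media types. Here is a hedged Python sketch of the basics; the server URL and credentials are hypothetical, while the headers and media types follow the SNIA CDMI specification:

```python
# Hedged sketch of CDMI's RESTful basics. The server and credentials are
# made up; the version header and media types come from the SNIA spec.

import requests

BASE = "https://cdmi.example.com/cdmi"   # hypothetical CDMI-capable cloud storage
HEADERS = {"X-CDMI-Specification-Version": "1.0.2"}
AUTH = ("user", "pass")

# Create a container (roughly a folder) ...
requests.put(
    f"{BASE}/my-container/",
    headers={**HEADERS, "Content-Type": "application/cdmi-container"},
    auth=AUTH,
)

# ... then store and read back a data object.
requests.put(
    f"{BASE}/my-container/hello.txt",
    headers={**HEADERS, "Content-Type": "application/cdmi-object"},
    json={"value": "Hello, cloud storage"},
    auth=AUTH,
)
obj = requests.get(
    f"{BASE}/my-container/hello.txt",
    headers={**HEADERS, "Accept": "application/cdmi-object"},
    auth=AUTH,
).json()
print(obj["value"])
```

Continue reading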

“Cloud” hosting hacked – customer data lost

Yes, yes, I have been inactive for almost 2 months. There were many things I had to do to put my business back into shape, hence the lack of activity on my blog.

Yes, yes, I have a lot of catching up to do, but first I would like to report that one of the more prominent web hosting companies in Malaysia (many of which frequently brand themselves as “Cloud” companies) has been hacked.

I got the news at about 8.00am on the morning of September 28th, while I was in Bangalore, India. A friend of mine buzzed me on Facebook Messenger and shared the following:

Thursday, September 27, 2012 1:46 AM
Date: 27th Sep 2012
Time: 6.01PM GMT +0800

We have an intrusion incident that happened early this morning around 12midnight of 27th September 2012. About 50 customers’ Virtual Machines hosted on our CLOUD were deleted from the cloud server. When we spotted the abnormal behavior, we managed to stop the intruder from causing more damages to our system.

From our initial investigation, we suspect one of our employees who will leave the company at this month end logged into one of our control panels and deleted some Virtual Machines. The backup was terminated at the same time when the Virtual Machines were deleted.

At this point of time, our team is working relentlessly on restoring the affected virtual machines and customer data.

In the mean time, my COO is lodging a police report and my manager is lodging a report to MyCERT while I am writing this email.

We are truly sorry about the whole incident as it has caused a great deal of inconvenience to our customers and their end customers as well.

Please also be rest assured that our CLOUD is truly secured; this incident was not a successful hacking attempt but rather sabotage via an ordinary login.

Detailed investigation reports will be compiled and sent to our customers.

Sincerely,

Chan Kee Siak
Founder and CEO

===================================
Summary / History of issues:
===================================
27th Sep 2012,

1.00am:
- We detected several virtual machines on the cloud were throwing warning signals.
- Technical Managers were immediately informed.

01.30am:
- We found out that an intruder was attempting to delete some of the virtual machines on our CLOUD cluster.
- The intruder was using a valid login to access our CLOUD control panel.
- COO was informed, signed in to co-ordinate.
- The access of the intruder has been disabled to prevent further damage.
- We posted an announcement at: https://support.exabytes.com.my/News/2248/c...aintenance.aspx

02.00am:
- CEO was informed.
- We found out that the intruder was using the login ID and password which belonged to one of the staff members whom we had recently sent out termination notice. The last working day of this staff was end of this month.
- Around 50++ Virtual Machines / VPS were affected.
- We started to inform affected customers.

02.30am:
- Rebuild and restoration of virtual machines began.

10.00am:
- Some Virtual Machines were Restored. The rest were still pending, on going.
- For Virtual machines without extra R1Soft Backup, we have recreated blank virtual machines with Operating System.

12:30pm:
- Attempted to recover the deleted backup on the CLOUD Backup server via data recovery tool. No guarantee and no ETA yet, we were doing our very best.

5.39pm:
- 80% of virtual machines were recreated. However, some were without the latest backup of data.
- Our engineers were attempting to recover the Cloud Backup Hard Drive with the use of recovery tool. However, as the size was huge, it might take few more hours.

Damage:
- The CLOUD Accounts, Virtual Machines and CLOUD Backup of affected clients were deleted. Only client with additional R1Soft backup still has the recent backup.

=================================

Date: 27th September 2012
Time: 1:55 AM GMT+8

Maintenance Details:
We have been alert by our monitoring system that certain Cloud VM has been found to be inaccessible. Our senior admin engineers are now working to resolve the issues.

Maintenance effect:
VMs affected isolated under MY-CLOUD-02 Zone.

We regret for any inconveniences caused.

Best regards,

Support team
------------------
Technical Support Department.

Continue reading

Houston, we have an OpenStack problem

I have always wanted to look deeper into OpenStack, but I never got around to it. However, last week, something about NASA and OpenStack caught my attention … something about NASA pulling out of OpenStack development.

The spin that “OpenStack has come into its own” is true, because OpenStack today has 180 companies (at last count, on June 20th 2012) participating in and contributing to the development, deployment and marketing of the highly popular Infrastructure-as-a-Service cloud computing project. So the NASA withdrawal itself was not felt as badly as what NASA said next.

When NASA CIO Linda Cureton announced that NASA had shifted to Amazon Web Services (AWS) for its enterprise cloud-based infrastructure and saved almost a million dollars in costs, that was a clear and blatant impalement of the very heart and soul of OpenStack. NASA, one of the 2 founders of OpenStack in 2010, has switched sides to announce its preference for OpenStack’s rival, AWS. It pains me just to hear of such a defection.

Continue reading

SMP than VMware

VMware is not a panacea for all your server virtualization requirements, but because they do fantastic marketing (not to mention running 1 small seminar every 1.5-2 months here in Malaysia last year), everyone thinks they are the only choice for server virtualization.

Efforts from Citrix Xen, Microsoft Hyper-V and RedHat Virtualization do not seem to make a dent in VMware’s armour, and it is beginning to feel as if VMware is the only choice for server virtualization. However, every new server virtualization proposal ends up with the customer buying a brand new, much more powerful server. More CPUs, more cores, and more RAM (I am not going into VMware vRAM licensing issues here, but customers know they are caged in).

You see, VMware’s style of server virtualization is in-system virtualization. The physical resources within the system are pooled, virtualized and shared with the virtual machines (VMs) in the physical chassis. With the exception of distributed vSwitches (dvSwitch), CPUs, processing cores and RAM are pretty much confined to what’s available in the physical box in most server virtualization environments. You can envision the concept of VMware’s in-system virtualization in the diagram below:

So, the consolidation (and virtualization) phase of older physical servers would involve packing tons of CPU cores and tons of RAM into a newer, high-end server.

I just visited a prospect a few days ago. For about 30 users of an ERP system and perhaps 100 users of Zimbra mailboxes, he lamented that he had to invest in 2 Dell R710 servers with 64GB of RAM each, sporting 2 x 8-core Intel Xeons. That sounded like overkill to me, but that is what is happening here in this part of the world. Customers are given a perception of inadequacy, and made to doubt themselves, when they virtualize their servers: “What if I don’t have enough cores? What if I don’t have enough RAM?” That in itself is the typical Malaysian (and Singaporean) kiasu mentality. Check out the Wikipedia definition of kiasu here.

Such a high-end server costs a lot of moolah. Furthermore, the scalability and performance of the virtualized servers in the VMs are trapped within how much these servers can scale physically. If a server is maxed out at 16 cores and 128GB of RAM, then the customer has to upgrade again with a server forklift. That’s not good.

And one more thing. VMware server virtualization is not ready for High Performance Computing (HPC) …yet.

Let’s look at this another way. Let’s assume you can approach server virtualization in an outward manner, rather than with the inward-looking, within-the-box kind of thinking of the VMware in-system method.

What if you could invest in lower-end x86 servers, each with 1 x quad-core CPU and 8GB of RAM? What if you could aggregate many of these lower-end servers together and build a large cluster of them into a huge symmetric multiprocessing (SMP) server farm that supports 1,024 CPUs with 16,384 cores and 64TB of RAM? Have a look at this video, which explains what I just mentioned:

ScaleMP video

Yeah, yeah … it’s a marketing video from ScaleMP. But I am looking beyond the company, at the possibility of this out-system type of server virtualization. The ability to pool the CPU processing power of many physical servers, and to aggregate the physical RAM of all the combined servers into a single shared memory architecture, unleashes the true power of server virtualization. This is THE next-generation symmetric multiprocessing (SMP) architecture, and it breaks free from the scalability limitations of the inward virtualization of physical servers.
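A back-of-envelope sketch in Python makes the out-system arithmetic obvious; the node configurations below are my own illustrative numbers:

```python
# Back-of-envelope sketch of the out-system idea: many commodity nodes
# presented as one big SMP machine. Node counts/sizes are illustrative.

def aggregate(nodes, sockets_per_node, cores_per_socket, ram_gb_per_node):
    return {
        "cpus": nodes * sockets_per_node,
        "cores": nodes * sockets_per_node * cores_per_socket,
        "ram_tb": nodes * ram_gb_per_node / 1024,
    }

# e.g. 128 low-end boxes, each 1 x quad-core CPU with 8GB RAM ...
print(aggregate(128, 1, 4, 8))      # one virtual SMP: 128 CPUs, 512 cores, 1TB RAM

# ... versus the same idea scaled with bigger building blocks.
print(aggregate(256, 4, 16, 256))   # 1,024 CPUs, 16,384 cores, 64TB RAM
```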

In the past, SMP systems relied on heavy programmability of the applications to scale. Applications didn’t necessarily scale on-the-fly with SMP systems, and some level of configuration and programming had to be applied to deal with proprietary SMP methods and interconnects. ScaleMP’s vSMP Foundation hypervisor removes the proprietary nature of SMP, bringing x86 server virtualization to meet the demands of HPC.

Here’s a look at the high level architecture of ScaleMP vSMP:

This type of architecture bears similarity to the RNA Networks solution that I blogged about some time ago. RNA Networks, which was acquired by Dell late last year, based its solution on RDMA technology and protocols, and was more about enhancing scalability and performance through memory pooling via its Memory Cloud. ScaleMP’s patent-pending technology is more than that. It pools both memory and processing cores, giving it the greater scalability and performance that HPC environments demand.

The folks at ScaleMP contacted me a couple of weeks back and shared some of their marketing datasheets and whitepapers. While the information passed to me was OK, I wish it had taken a deeper dive into the technology and implementation. I hope they can share that; I don’t mind signing an NDA.

Well, this is done pro bono, because I want everyone to know the choices and possibilities out there. It is my worldly cause to have people educated, because only by being informed do we make better choices. The server virtualization world isn’t always about VMware, you know.