Monday, April 30, 2012

OpenStack gains momentum as an open source cloud platform



Rackspace, in partnership with NASA, began its effort to create an open source platform for the cloud back in 2010. Unlike other standards efforts already underway, OpenStack gave software providers the ability to create their own cloud products. Since then, dozens of companies, including Dell, IBM, and Cisco, have joined the effort to create a common cloud infrastructure. Although Rackspace has been a vocal proponent of the platform, until now the company leveraged the proprietary Slicehost technology it acquired four years ago as the basis of its cloud product portfolio.

As noted on Midsize Insider, Hewlett-Packard (HP) announced the launch of its OpenStack-based cloud product family weeks before Rackspace. However, Rackspace's move is likely to garner more industry attention. Not only is Rackspace one of the first contributors to OpenStack, it is also the biggest cloud infrastructure provider after Amazon Web Services (AWS). Any missteps by Rackspace could have a tremendous impact on industry adoption of the open source cloud operating system, which is why the company is being cautious in its deployment strategy.

According to CMSWire, Rackspace is allowing registration for Cloud Servers, the company's core offering, and will begin providing access on May 1, 2012. Rackspace has not provided firm dates for other products in the portfolio. The cloud-based MySQL offering, Cloud Databases, and monitoring solution, Cloud Monitoring, are in early access, which signifies that the products are "production workload ready but have limited support available, no service commitments and no billing."

The remaining products, Cloud Control Panel, Cloud Block Storage, and Cloud Networks, are available in preview. Nobody is surprised that Rackspace is transitioning its cloud products to OpenStack; however, most industry insiders did not expect the company to move so quickly. Rackspace might have felt pressure to move forward after Citrix abandoned OpenStack in favor of CloudStack, another open source cloud standardization effort.

All of the activity around OpenStack and other cloud standardization efforts isn't just a clever marketing exercise or a petty feud among technology companies. Cloud use has increased rapidly, but cloud standards are still in an embryonic state. In fact, it was only last year that the National Institute of Standards and Technology (NIST) approved a common definition of cloud computing. Programming interfaces, architecture, storage strategy, and almost every other aspect of cloud services can vary among vendors. This lack of mature standards can make it extremely difficult and expensive for organizations to move between cloud providers if they are unsatisfied with a vendor or their needs change.

Until standards are defined and widely accepted, businesses using the cloud are at a relatively high risk of vendor, data and solution lock-in. These risks are common with new and emerging technology. However, the cloud can amplify these challenges because of how broadly and deeply the technology can be integrated. The risk only grows as organizations store more data and build more solutions in the cloud. The impact of these issues goes beyond technology teams. Challenges moving between cloud vendors or services can also reduce business agility and competitiveness by increasing the time, effort, and investment required to modify solutions and processes.

OpenMAMA Project rallied around by Financial Services Industry



The Linux Foundation Enterprise End User Summit -- OpenMAMA, a Linux Foundation Labs project, today announced the availability of OpenMAMA 2.1, the open source Middleware Agnostic Messaging API and the first neutral standard in data messaging for financial services.

The project today is also welcoming three new Steering Committee participants: IBM, Tick42 (formerly known as DOT) and TS-Associates. Founding members for the project include Bank of America Merrill Lynch, EMC, Exegy, Fixnetix, and NYSE Technologies, who originally created the MAMA platform.

Announced last fall, the OpenMAMA project is an open source Application Programming Interface (API) that connects multiple transports and applications. Collaboration among companies and the Linux and open source communities today is enabling users to embrace new, common middleware technologies and applications as the market changes. It also helps organizations reduce their total cost of ownership and time to market for "event-driven" applications and ensures high performance, both in terms of throughput and message latency.

Features and technologies in OpenMAMA 2.1 include:

1. The Middleware Agnostic Market Data API (MAMDA), which adds a market-specific data application development framework that includes support for quotes, trades and order books. OpenMAMDA will ultimately serve as the reference implementation of the NYSE Data Model.
     
2. Support for C++, Java and Windows. OpenMAMA is built using the C programming language and is designed to run on Linux. New contributions include wrappers that allow programmers to develop OpenMAMA applications written in C++ and Java, and to deploy these applications on Windows as well as Linux.
     
3. Integration of NYSE Technologies' Open Data Model Project, which provides an end-to-end open platform for facilitating and simplifying the consolidation of market data with independent technology, encoding formats and distribution mechanisms. This will include support for 200+ data feed venues globally and is expected to be compliant with NYSE Technologies' Market Data Platform.
     
4. Completion of the AMQP Bridge, which integrates OpenMAMA with the standard protocol for interoperability among all messaging middleware.
                     
5. Vendor adoption. For example, Exegy today is announcing it is now delivering data from an Exegy Ticker Plant through OpenMAMA.
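
For readers unfamiliar with how a single API can sit on top of many middleware products, the sketch below shows the general pattern in Java. It is a hypothetical illustration, not the actual OpenMAMA API: applications code against one messaging interface, while a per-middleware "bridge" supplies the transport, so switching vendors becomes a configuration change rather than a rewrite.

    // Hypothetical sketch of a middleware-agnostic messaging layer;
    // the names here are illustrative, not OpenMAMA's real API.
    import java.util.HashMap;
    import java.util.Map;

    interface MessageListener {
        void onMessage(String topic, byte[] payload);
    }

    interface TransportBridge {  // implemented once per middleware vendor
        void publish(String topic, byte[] payload);
        void subscribe(String topic, MessageListener listener);
    }

    final class Messaging {
        private static final Map<String, TransportBridge> BRIDGES =
                new HashMap<String, TransportBridge>();

        static void registerBridge(String name, TransportBridge bridge) {
            BRIDGES.put(name, bridge);
        }

        // Applications name the transport ("amqp", "vendorX") in configuration;
        // application code itself never changes when the middleware does.
        static TransportBridge loadBridge(String name) {
            TransportBridge bridge = BRIDGES.get(name);
            if (bridge == null) {
                throw new IllegalArgumentException("No bridge registered: " + name);
            }
            return bridge;
        }
    }

The AMQP bridge in point 4 above is this pattern applied to a standard protocol: once that one bridge exists, any application written against the common API can run over AMQP-compliant middleware unchanged.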
     
"The OpenMAMA project represents what all Linux Foundation Labs projects represent: collaboration at its best," said Jim Zemlin, executive director at The Linux Foundation. "OpenMAMA and open source software are accelerating for the messaging layer of the stack what Linux is doing for the OS-level of the software stack: driving innovation to support the largest number of complex transactions in real-time. The companies contributing to these efforts are leaders in their respective areas, and we're happy to see the tremendous progress that together they're making."

"We are truly excited to see OpenMAMA delivered to the financial services community to become a core building block of future innovation in trading technology," said Stanley Young, CEO, NYSE Technologies. "I thank my colleagues at NYSE Technologies and the OpenMAMA steering committee for their leadership and dedication to bringing this first of a kind initiative to the marketplace."

The OpenMAMA project members are meeting today with Linux kernel developers at The Linux Foundation Enterprise End User Summit to discuss these new features and what comes next. This year the invitation-only event is hosted by NYSE Technologies in New York.

Sunday, April 29, 2012

Penguicon 2012 Open Source Software & Science Fiction Convention




Penguicon, an open-source software and science fiction gathering that expects to draw more than 1,200 fans to the Hyatt Regency hotel in Dearborn this weekend, is a great example of a convention that breaks with convention.

After all, there's no rule stating science fiction conventions must be restricted to middle-aged men prancing around in alien costumes while discussing Isaac Asimov's "Three Laws of Robotics."

The decade-old Penguicon (named for Tux, the Linux penguin mascot) started out as a gathering of fans with crossover interests in computer programming and speculative fiction, but in recent years it's opened up its umbrella to include foodies, costumers, musicians, gamers, microbrewers and others.

The weekend's events will offer presentations of all kinds, from an introduction to 3D printers to a lecture on "How Not To Get Arrested For Your Alternative Lifestyle."

Conference-goers will also have the opportunity to join a zombie walk through the hotel, to go gothic belly dancing, to make liquid nitrogen ice cream, to engineer a customizable marble roller-coaster, to parade around in a drag show and even to take a quick trip to the shooting range with a group called Geeks with Guns.

Considering that Penguicon is a celebration of open-source software, it's not surprising that the event has branched out to include other DIY fields, nor that it's run by a not-for-profit team of collaborators.

"I think first and foremost, 'open source' means run by the fans," said Christine Bender, head of operations for Penguicon 2012. "Every person that works on Penguicon is a volunteer. We all do it because we love it."

This year's guest speakers include the Hugo award-winning science fiction writer John Scalzi, esteemed software developer Jim Gettys, and MadHatter from the open source record label Scrub Club.

Bender said the convention is an opportunity to meet with inspiring creators. "When are you going to get a chance to hang out with them for a whole weekend? You might play a board game with them or have a beer at the bar," she said. "It's an interaction you might not have with them any other way."

BeOS is now Haiku




It was the summer of 2001, and computer programmer Michael Phipps had a problem: His favorite operating system, BeOS, was about to go extinct. Having an emotional attachment to a piece of software may strike you as odd, but to Phipps and many others (including me), BeOS deserved it. It ran amazingly fast on the hardware of its day; it had a clean, intuitive user interface; and it offered a rich, fun, and modern programming environment. In short, we found it vastly superior to every other computer operating system available. But the company that had created BeOS couldn’t cut it in the marketplace, and its assets, including BeOS, were being sold to a competitor.

Worried that under a new owner BeOS would die a slow, unsupported death, Phipps did the only logical thing he could think of: He decided to re-create BeOS completely from scratch, but as open-source code. An open-source system, he reasoned, isn’t owned by any one company or person, and so it can’t disappear just because a business goes belly-up or key developers leave.
Now if you’ve ever done any programming, you’ll know that creating an operating system is a huge job. And expecting people to do that without paying them is a little nuts. But for the dozens of volunteer developers who have worked on Haiku, it has been a labor of love. In the 11 years since the project began, we’ve released three alpha versions of the software, and this month we expect to release the fourth and final alpha. After that we’ll move to the beta stage, which we hope to get out by the end of the year, followed by the first official release, known as R1, in early 2013.

Even now, anybody can install and run the operating system on an Intel x86-based computer. Many of those who have done so comment that even the alpha releases of Haiku feel as stable as the final release of some other software. Indeed, of all the many alternative operating systems now in the works, Haiku is probably the best positioned to challenge the mainstream operating systems like Microsoft Windows and Mac OS. For both users and developers, the experience of running Haiku is incredibly consistent, and like BeOS, it is fast, responsive, and efficient. What’s more, Haiku, unlike its more established competitors, is exceedingly good at tackling one of the toughest challenges of modern computing: multicore microprocessors. Let’s take a look at why that is, how Haiku came to be, and whether the operating system running on your computer really performs as well as it should.

First, a little history. In 1991, a Frenchman named Jean-Louis Gassée and several other former Apple employees founded Be Inc. because they wanted to create a new kind of computer. In particular, they sought to escape the backward-compatibility trap they’d witnessed at Apple, where every new version of hardware and software had to take into account years of legacy systems, warts and all. The company’s first product was a desktop computer called the BeBox. Finding no other operating system that met their needs, the Be engineers wrote their own.

Released in October 1995, the BeBox didn’t last long. BeOS, on the other hand, quickly found a small yet loyal following, and it was soon running on Intel x86-based PCs and Macintosh PowerPC clones. At one point Apple even considered BeOS as a replacement for its own operating system. The company eventually released a stripped-down version of BeOS for Internet appliances, but it wasn’t enough. In 2001, Palm acquired Be for a reported US $11 million.

Saturday, April 28, 2012

SpringSource Releases Version 1.0 of Cloud Foundry Eclipse Plugin


SpringSource has released version 1.0 of Cloud Foundry integration for Eclipse, allowing developers to manage Cloud Foundry applications without leaving the IDE. Cloud Foundry is an open source PaaS solution developed by VMware/SpringSource (currently in Beta) that supports multiple programming languages (including both Java and Scala).



There are four different ways an application may be deployed to a Cloud Foundry instance:



1. From the command line via the VMC tool
2. As part of a Maven build via the Cloud Foundry Maven plugin
3. From the Spring Roo interface, if the application uses Spring Roo
4. Directly from the Eclipse IDE via the Cloud Foundry Eclipse plugin

The last case is the most interesting one for Java/Scala programmers using Eclipse, since it offers them an integrated solution where the IDE is used both for development and for management and deployment of Cloud Foundry applications. The solution comes in the form of an Eclipse plugin that can be installed either on STS (the Eclipse-powered SpringSource IDE) or on Eclipse Indigo via the Eclipse Marketplace. Notice that the Cloud Foundry plugin is self-contained and can be installed on a plain Eclipse installation that does not contain the Spring IDE plugin.

Once installed, the Cloud Foundry plugin utilizes the WTP infrastructure of Eclipse. Cloud Foundry instances are defined as "servers" and applications are deployed in a similar manner to normal JavaEE application servers. If you want to use the VMware hosted version of Cloud Foundry you should register first in order to obtain an account.

Cloud Foundry applications can use one of the many supported services (e.g. MySQL, PostgreSQL, MongoDB). The initialization and binding of these services to deployed applications can be performed visually, from configuration screens similar to those that hold settings for application servers. It is also possible to view remote files (e.g. logs) that reside in the managed Cloud Foundry instance. Finally, it is possible to debug applications, provided that they run in a locally hosted Cloud Foundry or Micro Cloud Foundry instance.

Whilst it is great to see Cloud Foundry tooling continuing to expand, we do feel that this version could be improved in a couple of ways. For one thing, there is a minor usability issue: in the first screen, where a Cloud Foundry instance is configured, you are always expected to define the "server" as localhost. In the second screen you define the actual instance, which can very well be the hosted version from VMware (contradicting the setting from the previous step). We found this somewhat confusing.

A more serious issue however is the kind of Eclipse projects that you can deploy in a Cloud Foundry instance via the plugin. According to the documentation the following project types are supported:

1. Java Web
2. Spring
3. Grails
4. Lift

However, a newly created Eclipse project with the "Java Web" facet is based by default on Servlet spec 3.0, which is not supported by the Cloud Foundry plugin at the time of writing; only Servlet spec 2.5 is supported. This option cannot be changed after project creation, however, so one must delete all Eclipse-specific project settings from the application and re-create the facet using version 2.5, which works as expected.

Commercial Cloud Foundry pricing and the exit from beta are expected to be announced during this year (2012).

7 programming myths - busted.


Even among people as logical and rational as software developers, you should never underestimate the power of myth. Some programmers will believe what they choose to believe against all better judgment.

The classic example is the popular fallacy that you can speed up a software project by adding more developers. Frederick P. Brooks debunked this theory in 1975, in his now-seminal book of essays, "The Mythical Man-Month."

Brooks' central premise was that adding more developers to a late software project won't make it go faster. On the contrary, they'll delay it further. If this is true, he argued, much of the other conventional wisdom about software project management was actually wrong.

Some of Brooks' examples seem obsolete today, but his premise is still sound. He makes his point cogently and convincingly. Unfortunately, too few developers seem to have taken it to heart. More than 35 years later, mythical thinking still abounds among programmers. We keep making the same mistakes.

The real shame is that, in many cases, our elders pointed out our errors years ago, if only we would pay attention. Here are just a few examples of modern-day programming myths, many of which are actually new takes on age-old fallacies.

Programming myth No. 1: Offshoring produces software faster and cheaper

These days, no one in their right mind thinks of launching a major software project without an offshoring strategy. All of the big software vendors do it. Silicon Valley venture capitalists insist on it. It's a no-brainer -- or so the service providers would have you believe.

It sounds logical. By off-loading coding work to developing economies, software firms can hire more programmers for less. That means they can finish their projects in less time and with smaller budgets.

But hold on! This is a classic example of the Mythical Man-Month fallacy. We know that throwing more bodies at a software project won't help it ship sooner or cost less -- quite the opposite. Going overseas only makes matters worse.

According to Brooks, "Adding people to a software project increases the total effort necessary in three ways: the work and disruption of repartitioning itself, training new people, and added intercommunication."

Let's assume that the effort required for repartitioning and training is the same for outsourced projects as for homegrown ones (a dangerous assumption). The communication effort required for outsourcing is much higher. Language, culture, and time-zone differences add overhead. Worse, offshore development teams are often prone to high turnover rates, so communication rarely improves over time.
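
Brooks' intercommunication point is easy to quantify with standard back-of-the-envelope arithmetic: a team of n developers has n(n-1)/2 potential communication channels. A five-person in-house team has 10; add ten offshore developers and the resulting 15-person team has 105, roughly a tenfold increase in coordination paths for a threefold increase in headcount, before language and time-zone friction are even counted.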

Little wonder there's no shortage of offshoring horror stories. Outsourcers who promise more than they deliver are a recurring theme. When deadlines slip and clients are forced to finish the work in-house, any putative cost savings disappear.

Offshoring isn't magic. In fact, it's hard to get right. If an outsourcer promises to solve all of your problems for nothing, maintain a healthy skepticism. That free lunch could end up costing more than you bargained for.

Programming myth No. 2: Good coders work long hours

We all know the stereotype. In popular culture, programmers stay up late into the night, coding. Pizza boxes and energy-drink cans litter their desks. They work weekends; indeed, they seldom go home.

There's some truth to this caricature. In a recent analysis of National Health Interview Survey data, programming tied for the fifth most sleep-deprived profession. Long hours are particularly endemic in the video game industry, where developers must endure "crunch time" as deadlines approach.

But it doesn't have to be that way. There's plenty of evidence to suggest that long hours don't increase productivity. In fact, crunch time may hurt more than it helps.

There's nothing wrong with putting in extra effort. Fred Brooks praises "running faster than necessary, moving sooner than necessary, trying harder than necessary." But he also warns against confusing effort with progress.

More often than not, Brooks says, software projects run late due to chronic schedule slippage, not catastrophes. Maybe the initial estimates were unrealistic. Maybe the project milestones were fuzzy and poorly defined. Or maybe they changed midstream when the client added requirements or requested new features.

Either way, the result is the same. As the little delays add up, programmers are forced into crisis mode, but their extra efforts are spent chasing goals that can no longer be reached. As the project schedule slips further, so does morale.

Some programmers might be content to work until they drop, but most have families, friends, and personal lives, like everyone else. They'd be happy to leave the office when everyone else does. So instead of praising coders for working long hours, concentrate on figuring out why they have to -- and how it can stop. They'll appreciate it far more than free pizza, guaranteed.

Programming myth No. 3: Great developers are 10 times more productive

Good programmers are hard to find, but great programmers are the stuff of legend -- or at least urban legend.

If you believe the tales, somewhere out there are hackers so skilled that they can code rings around the rest of us. They've been dubbed "10x developers" -- because they're allegedly an order of magnitude more productive than your average programmer.

Naturally, recruiters and hiring managers would kill to find these fabled demigods of code. Yet for the most part, they remain as elusive as Bigfoot. In fact, they probably don't exist.

Unfortunately, the blame for this myth falls on Fred Brooks himself. Well, almost -- he's been misquoted. What Brooks actually says is that, in one study, the very best programmers were 10 times more productive than the very worst programmers, not the average ones.

Most developers fall somewhere in the middle. If you really see a 10-fold productivity differential in your own staff, chances are you've made some very poor hiring choices in the past (along with some very good ones).

What's more, the study Brooks cites was from 1966. Modern software project managers know better than to place too much faith in developer productivity metrics, which are seldom reliable. For one thing, code output doesn't tell the whole story. Brooks himself admits that even the best programmers spend only about 50 percent of the workweek actually coding and debugging.

This doesn't mean you shouldn't try to hire the best developers you can. But waiting for superhuman coders to come along is a lousy staffing strategy. Instead of obsessing over 10x developers, focus on building 10x teams. You'll have a much larger talent pool to choose from, which means you'll fill your vacancies and your project will ship much sooner.

Programming myth No. 4: Cutting-edge tools produce better results

Software is a technology business, so it's tempting to believe technology can solve all of its problems. Wouldn't it be nice if a new programming language, framework, or development environment could slash costs, reduce time to market, and improve code quality, all at once? Don't hold your breath.

Plenty of companies have tried using unorthodox languages to outflank their competitors. Yammer, a social network, wrote its first version in Scala. Twitter began life as a Ruby on Rails application. Reddit and Yahoo Store were both built with Lisp.

Unfortunately, most such experiments are short-lived. Yammer switched to Java when Scala couldn't meet its needs. Twitter switched from Ruby to Scala before also settling on Java. Reddit rewrote its code in Python. Yahoo Store migrated to C++ and Perl.

This isn't to say your choice of tools is irrelevant. Particularly in server environments, where scalability is as important as raw performance, platforms matter. But it's telling that the aforementioned companies all switched from trendy languages to more mainstream ones.

Fred Brooks foresaw this decades ago. In his essay "No Silver Bullet," he writes, "There is no single development, in either technology or management technique, that promises even one order of magnitude improvement in productivity, in reliability, in simplicity."

For example, when the U.S. Department of Defense developed the Ada language in the 1970s, its goal was to revolutionize programming -- no such luck. "[Ada] is, after all, just another high-level language," Brooks wrote in 1986. Today it's a niche tool at best.

Of course, this won't stop anyone from inventing new programming languages, and that's fine. Just don't fool yourself. When building quality software is your goal, agility, flexibility, ingenuity, and skill trump technology every time. But choosing mature tools doesn't hurt.

Programming myth No. 5: The more eyes on the code, the fewer bugs

Open source developers have a maxim: "Given enough eyeballs, all bugs are shallow." It's sometimes called Linus' Law, but it was really coined by Eric S. Raymond, one of the founding thinkers of the open source movement.

"Eyeballs" refers to developers looking at source code. "Shallow" means the bugs are easy to spot and fix. The idea is that open source has a natural advantage over proprietary software because anyone can review the code, find defects, and correct them if need be.

Unfortunately, that's wishful thinking. Just because bugs can be found doesn't mean they will be. Most open source projects today have far more users than contributors. Many users aren't reviewing the source code at all, which means the number of eyeballs for most projects is exaggerated.

More importantly, finding bugs isn't the same as fixing them. Anyone can find bugs; fixing them is another matter. Even if we assume that every pair of eyeballs that spots a bug is capable of fixing it, we end up with yet another variation on Brooks' Mythical Man-Month problem.

One 2009 study found that code files that had been patched by many separate developers contained more bugs than those patched by small, concentrated teams. By studying these "unfocused contributions," the researchers inferred an opposing principle to Linus' Law: "Too many cooks spoil the broth."

Brooks was well aware of this phenomenon. "The fundamental problem with program maintenance," he wrote, "is that fixing a defect has a substantial (20 to 50 percent) chance of introducing another." Running regression tests to spot these new defects can become a significant constraint on the entire development process -- and the more unfocused fixes, the worse it gets. It's enough to make you bug-eyed.
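
To put numbers on Brooks' estimate: at the midpoint of his range, about 30 percent, shipping fixes for 100 defects introduces roughly 30 new ones, whose fixes introduce roughly 9 more, and so on. In expectation, clearing the original 100 bugs takes about 100 / (1 - 0.3), or some 143 fixes, every one of which needs regression testing.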

Programming myth No. 6: Great programmers write the fastest code

A professional racing team's job is to get its car to the finish line before all the others. The machine itself is important, but it's the hard, painstaking work of the driver and the mechanics that makes all the difference. You might think that's true of computer code, too. Unfortunately, hand-optimization isn't always the best way to get the most performance out of your algorithms. In fact, today it seldom is.

One problem is that programmers' assumptions about how their own code actually works are often wrong. High-level languages shield programmers from the underlying hardware by design. As a result, coders may try to optimize in ways that are useless or even harmful.

Take the XOR swap algorithm, which uses bitwise operations to swap the values of two variables. Once, it was an efficient hack. But modern CPUs boost performance by executing multiple instructions in parallel, using pipelines. That doesn't work with XOR swap. If you tried to optimize your code using XOR swap today, it would actually run slower because newer CPUs favor other techniques.
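
For readers who haven't seen the trick, here is a minimal Java version; the comments note why it no longer pays:

    // The classic XOR swap: exchanges two integers without a temporary variable.
    public class XorSwap {
        public static void main(String[] args) {
            int a = 3, b = 5;
            a ^= b;  // a now holds a XOR b
            b ^= a;  // b = b XOR (a XOR b) = original a
            a ^= b;  // a = (a XOR b) XOR original a = original b
            System.out.println(a + " " + b);  // prints "5 3"
            // Each XOR depends on the previous result, so the three instructions
            // form a serial chain that a pipelined CPU cannot overlap, whereas a
            // plain temp-variable swap gives the CPU independent instructions.
            // Worse, if "a" and "b" are the same memory location (say, one array
            // element swapped with itself), XOR swap zeroes it instead of swapping.
        }
    }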

Multicore CPUs complicate matters further. To take advantage of them, you need to write multithreaded code. Unfortunately, parallel processing is hard to do right. Optimizations that speed up one thread can inadvertently throttle the others. The more threads, the harder the program is to optimize. Even then, just because a routine can be optimized doesn't mean it should be. Most programs spend 90 percent of their running time in just 10 percent of their code.
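
That figure has a sobering corollary: if 90 percent of the running time is spent in 10 percent of the code, then even an infinitely fast version of the other 90 percent of the code cuts total running time by only 10 percent, a speedup of barely 1.1x. Profiling to find the hot 10 percent is worth more than any amount of blind hand-tuning.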

In many cases, you're better off simply trusting your tools. Already in 1975, Fred Brooks observed that some compilers produced output that handwritten code couldn't beat. That's even truer today, so don't waste time on unneeded hand-optimizations. In your race to improve the efficiency of your code, remember that developer efficiency is often just as important.

Programming myth No. 7: Good code is "simple" or "elegant"

Like most engineers, programmers like to talk about finding "elegant" or "simple" solutions to problems. The trouble is, this turns out to be a poor way to judge software code.

For one thing, what do these terms really mean? Is a simple solution the same as an elegant one? Is an elegant solution one that is computationally efficient, or is it one that uses the fewest lines of code?

Spend too long searching for either, and you risk ending up with that bane of good programming: the clever solution. It's so clever that the other members of the team have to sit and puzzle over it like a crossword before they understand how it works. Even then, they dare not touch it, ever, for fear it might fly apart.

In many cases, the solution is too clever even for its own good. In their 1974 book, "The Elements of Programming Style," Brian Kernighan and P.J. Plauger wrote, "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?" For that matter, how will anyone else?

In a sense, concentrating on finding the most "elegant" solution to a programming problem is another kind of premature optimization. Solving the problem should be the primary goal.

So be wary of programmers who seem more interested in feathering their own caps than in writing code that's easy to read, maintain, and debug. Good code might not be that simple. Good code might not be that elegant. The best code works, works well, and is bug-free. Why ask for more?

VPS.NET Launches Cloud Application Marketplace Built on Standing Cloud Platform


VPS.NET, a leading provider of public clouds, today announced the launch of its new cloud application marketplace. Built on technology from Standing Cloud, a leading provider of cloud application management solutions, the marketplace gives VPS.NET customers a vastly simplified way to discover, deploy and manage applications in the cloud.

As implemented by VPS.NET, Standing Cloud's turn-key Application Marketplace is an easy-to-use catalog of applications, software, development tools and deployment options, fully pre-configured to run instantly and reliably in the cloud, and optimized for the VPS.NET infrastructure. Once an application is deployed, Standing Cloud’s automated application lifecycle management services give customers an easy way to manage the application on a continuing basis, including monitoring, scaling, back-ups, upgrades, auto-restore and more.

"Making it easy to launch and run fully configured versions of popular products like WordPress or Drupal in just a few clicks is a great value-add for our customers, and Standing Cloud made it easy for us,” said Rus Foster, Managing Director of VPS.NET. “Standing Cloud customized the marketplace with our own look and feel, and taps directly into our API to run seamlessly on our infrastructure."

"Plus,” Foster added, “because Standing Cloud's model-driven engine uses scripted deployments instead of virtual appliance images, we eliminate the cost of image library management while offering more flexibility, portability and reliability to application users on VPS.NET.”

“Cloud-based marketplaces that make it simple for customers to shop for and operate applications in the cloud without the burden of systems administration are dramatically changing the way business applications are consumed,” said Dave Jilk, CEO of Standing Cloud. “VPS.NET had the foresight to see that, and we’re pleased to be working with them to make this vision a reality for their customers.”

New application makes supercomputing simple


Supercomputers are powerful tools for scientists. They are also very expensive, so wasted time can mean a lot of wasted resources. But making the most efficient use of them is not the easiest proposition in the world; it's not just a case of clicking a button to analyse a protein. However, fitting out the world of supercomputers with a user-friendly, web-based interface is the focus of an open source project based at Western Australia's Murdoch University.

Last year Murdoch publicly launched Yabi, a tool equipped with a web interface to make using supercomputers simpler.
The computational physics community, as an example, may be very proficient in the intricacies of shell scripts and working with a command line, says Professor Matthew Bellgard, Director of Murdoch's Centre for Comparative Genomics. "They've had a lot of experience in the past running their Fortran code using 4000 cores or 10,000 cores," he says. However, "there are other domains where scientists don't necessarily have that skill running command line code or porting their code from one supercomputer to another."
Learning at least a smattering of Perl or some other scripting language is often the norm for life scientists, but it can require a significant investment of time to get up to speed and make a supercomputer do what a scientist wants.

"When we started down this particular path of building Yabi, we wanted to simplify access to supercomputing infrastructure for end users. And the end users typically are non-IT-proficient; consider life science researchers or geoscientists," Professor Bellgard says. "While some of them have the ability to write scripts and use programming languages a lot would prefer to be able to just drag and drop and have access to tools that you could access via the command line, but in a web-based environment. "So I guess our first remit was, 'Can we simplify access to high performance computing (HPC) infrastructure?'"

Yabi has already been used to make life easier for scientists studying metagenomics (genomics is the study of the DNA of living organisms; metagenomics looks at the profile of organisms in a particular sample, for example a soil or sedimentary sample). "Metagenomics is a relatively new area and the tools are just being developed. The tools for data analysis of DNA sequences are readily available, but metagenomics is relatively new."

Professor Bellgard says that there have been marked improvements in productivity thanks to Yabi. "We are able to use the Yabi environment to make available tools in a very accessible way for life science researchers."

The team has made resources available for other research communities, for example scientists studying the cattle tick genome. "We've created a bioinformatics resource which behind the scenes uses the Yabi environment," Professor Bellgard says.

Although Yabi comes out of a life science environment, the tool can be used across scientific disciplines. One of its strengths, according to Professor Bellgard, is its flexibility.

"Pretty much any tool that can be run on a command line can actually be incorporated into the Yabi environment. So any command line tool whether it's a statistical tool, whether it's a genomics or a bioinformatics tool, whether it be a remote sensing tool, whether it be an astronomy tool can be incorporated."
And although its focus is on HPC, Yabi utilises standard tools and protocols so it can be deployed in non-supercomputing environments. The developers are now targeting the cloud; "we are getting Yabi ready for a cloud based deployment which demonstrates that it is a very scalable system," Professor Bellgard says.

"Imagine a workflow, for example, of 10 different tools that a user might pull together to conduct analysis on some data. The first tool is selecting a file from a computer that is connected to a remote sensor somewhere in outback Australia, then the next tool is running a file format process that may end up being computationally intensive. The third item is a tool that is on a supercomputer local to the researcher.

"The next tool is remotely accessing another supercomputer, then the fourth or fifth tool is actually utilising the cloud resources and the results are stored in the cloud somewhere. There is no lock in so you get to pick and choose where you run the tools or the administrator does that for the users. You do not have to log in to Yabi in order to get access to your data, so you can choose where the results are stored."

Part of the philosophy driving Yabi is not just simplifying workflow management for end users, but for system administrators too. "You can imagine one scientist comes and says 'please install 10 tools for me into this Yabi environment' but then if 50 scientists come and they all have different tools, the last thing you want is the bottleneck to be at the system administration level. We can abstract away the complexities for the user but we also want to abstract away the complexities for the sysadmin."

Yabi was open sourced in July last year. "We were mindful not to open source it too early, because there is a level of support that is required for the international community, and we were making sure that the system was in a state that would be able to be supported by our group," Professor Bellgard says. It has already been deployed at QFAB, the Queensland Facility for Advanced Bioinformatics, and the Murdoch team is "in conversation" with scientists at UTS in Sydney as well as at other universities and CSIRO.

Thursday, April 26, 2012

Ubuntu 12.04 Adds countless Open Source OS Enhancements


As Linux fans have undoubtedly noticed by now, the official debut of Ubuntu 12.04 — the latest version of the open source operating system and the first “long-term support” (LTS) release in two years — is only hours away. In preparation, here’s a recap of the most important new features in the release, and what they could mean for Ubuntu and the open source channel in the long term.

What’s New

The full list of new features to look forward to in Ubuntu 12.04 is available from the Ubuntu wiki, but characteristics of greatest note include:

Version 5.10 of the Unity desktop environment. While the interface has been controversial, it should be clear at this point that, protests aside, Canonical intends to make it a permanent part of Ubuntu. Fortunately, recent iterations of Unity are considerably more mature and user-friendly than their predecessors, traits that will hopefully placate many Ubuntu users who were unhappy with the interface in the beginning.

HUD, Canonical’s attempt to redefine the way users interact with applications. This feature debuts in an early form in Ubuntu 12.04 despite the warning that “HUD is in a very early stage of development, and not ready for production use.” Since the inclusion of the HUD does not come at the expense of any other functionality, however, it’s probably safe enough to include it and see how users respond.

Improvements to power-management in the pm-utils package. This is certainly a welcome enhancement for my money, since every extra minute of life I can squeeze out of my tired Dell netbook’s battery counts.

The debut of Kubuntu Active, one of the only serious attempts in the Linux world to develop an interface targeted specifically at tablets (as opposed to desktop environments including Unity and GNOME Shell, which want to work for a range of devices), as a “technology preview.”

There are also a variety of other changes worth noting beyond the desktop, although most of the big-ticket items will affect only desktop users.

Ubuntu Grows More Unique

Most of the features new to Ubuntu 12.04 are relatively minor, and none of the changes radically revolutionizes anything — unsurprising, since an LTS release, which Canonical needs to support for five years, is not one that lends itself to experimentation.

Nonetheless, if there's one thing the enhancements to the next version of Ubuntu highlight, it's that the operating system is continuing to diverge from the rest of the pack in many respects. Much of the software at its core is now developed by Canonical itself, with little reliance on upstream projects. Regardless of whether one likes it, Ubuntu is no longer simply Debian with a few user-friendly tweaks; with Unity, HUD and other independently developed features, it's increasingly becoming a beast of its own.

Apache announced Cassandra NoSQL Database 1.1


The Apache Software Foundation (ASF) announced the release of Apache Cassandra 1.1, the highly scalable open-source distributed database.

Cassandra 1.1 handles massive data sets across commodity machines, large server clusters and data centers without compromising performance, and it does so running in the cloud or partially on-premise in a hybrid data store. Apache Cassandra 1.1 delivers improved caching, a revised query language (CQL, the Cassandra Query Language, a subset of SQL), storage control, schema/structure, Hadoop integration/output, data directory control and scalability.

"Apache Cassandra is the leading scalable NoSQL database in terms of production installations—the 1.0 release was a huge milestone," said Jonathan Ellis, vice president of Apache Cassandra, in a statement. "Version 1.1 improves on that foundation with many features and enhancements that developers and administrators have been asking for."

ASF officials said Cassandra is gaining attention as a best-of-breed "NoSQL" solution for its ease of use, powerful data model, enterprise-grade reliability, tunable performance and incremental scalability with no single point of failure. Cassandra accommodates high query volumes at high speed (sub-millisecond writes) with low latency, and handles petabytes of data across formats and applications in real time.

As it can handle thousands of requests per second, Apache Cassandra is deployed at a wide variety of enterprises, including Adobe, Appscale, Appssavvy, Backupify, Cisco, Clearspring, Cloudtalk, Constant Contact, Digg, Digital River, Expedia, Formspring, IBM, Mahalo.com, Morningstar, Netflix, Openwave, OpenX, Palantir, PBS, Plaxo, Rackspace, Reddit, RockYou, Shazam, SimpleGeo, Spotify, Twitter, Urban Airship, U.S. government agencies, Walmart Labs, Yakaz and more.

The ASF said the largest Cassandra production cluster to date exceeds 300 terabytes of data over 400 machines.

"The v1.1 release shows how rapidly Apache Cassandra has matured,” Patrick McFadin, chief architect of Hobsons, which offers CRM solutions to the education market, said in a statement. “The focus has clearly shifted to usability which is the sign of a solid system. I look forward to getting it into production right away. With features like Row-level isolation and Composite keys, Apache Cassandra v1.1 is really addressing user-driven needs with innovative solutions. Well done to all contributors for making this a great release."

Wednesday, April 25, 2012

Amazon's Cloud Software Rental Business advances Open Source Apps


Late last week, Amazon Web Services (AWS) announced that it was diving into the software rental business in the cloud. Through the AWS Marketplace store, users can rent software from IBM, SAP, Microsoft and many other providers, and payments are made in the cloud. While the New York Times and many other media outlets that covered the news emphasized the proprietary software that AWS Marketplace is renting from commercial providers, there are also many open source applications for rent on the site. Renting freely available open source software is the latest shrewd move from Amazon.

As the New York Times noted: "The presence of I.B.M. in Amazon’s service indicates that some companies are already hedging their bets [regarding their support for open source cloud platforms]." Indeed, the AWS Marketplace has buy-in from a number of proprietary software players who are already backing OpenStack, Eucalyptus and other open source cloud platforms. But Amazon has smartly made available many open source applications for rent, where users don't have to wrestle with installation and can jump right into usage.
For example, for a cent or two an hour, users can leverage a LAMP stack; Ruby on Rails is priced similarly; and about five cents an hour gets you Ubuntu 11.10. Some may scoff at this scheme of renting software that users can get for free, but Amazon is clearly betting on there being a market for people who will happily pay very small fees to avoid installation hassles and jump right into applications.

In a number of instances, the applications on AWS Marketplace themselves are available at no charge, and customers will just end up paying small fees for cloud storage and resources. Amazon didn't become the 800-pound gorilla in the cloud by accident, and the company's new software rental business will probably be a hit.

NGINX launches the latest version of its web server


SAN FRANCISCO, CA and MOSCOW, Apr 24, 2012 (MARKETWIRE via COMTEX) -- Open source developer Nginx, Inc. today announced that the latest version of its widely deployed web server is now available for download. This milestone release, version 1.2.0, is the culmination of NGINX's annual development and extensive quality assurance cycle, led by the core engineering team and by the web server's enthusiastic user community. NGINX 1.2.0 is the latest production-quality release of the stable branch, incorporating over 40 new features and over 100 bug fixes introduced since April 2011.

"This recent release concludes a significant phase of development. We have made tremendous progress, streamlined our development efforts as a company, and are excited to see a quickly growing number of companies using NGINX as a modern and extremely cost-effective solution for application routing and acceleration," said Igor Sysoev, the author of NGINX web server and the founder of Nginx, Inc. "We will continue to work on a variety of extensions based on feedback from our extraordinary user community."

The second most popular web server for active sites on the Internet, NGINX ("engine x") has grown its market share of enabled domains by 200% since April 2011. Successful companies such as Pinterest, Instagram, CloudFlare, Airbnb, WordPress, GitHub, SoundCloud, Zynga, Eventbrite, OMGPOP, ModCloth, Heroku, RightScale, Engine Yard and many others rely on NGINX to build highly valued online services.

Underscoring its expanding acceptance as mainstream web infrastructure software, version 1.2.0 reflects NGINX's focus on performance, scalability, reliability and security. In addition to reinforcing software development and quality control processes, the company has launched an online code browser and a bug-tracking system for the users of NGINX, and has devoted increased resources to the ongoing improvement of the documentation.

Tuesday, April 24, 2012

"Super duper program update release" of Parted Magic


Parted Magic lead developer Patrick Verner has announced the arrival of a "super duper program update release" of his open source, multi-platform partitioning tool. Labelled "2012_04_21", the new version of the Linux distribution for disk partitioning and data recovery is based on the 3.2.15 kernel and upgrades more than 25 of the bundled programs and utilities.
These include version 1.42.2 of the e2fsprogs filesystem utilities, GNU Parted 3.1, Clonezilla 1.2.12-37 and version 0.12.1 of the Gnome PARTition EDitor (GParted). The NTFS-3G read/write driver, the TestDisk data recovery tool, the flashrom utility for flash chips and smartmontools have also been upgraded to the latest stable versions. Other changes include the addition of new "nopmodules" and "noscripts" kernel command line options, and fixes for some wireless device drivers as well as for problems when resizing FAT16/32 partitions.
A full list of changes and program updates can be found on the news page and in the change log. Parted Magic 2012_04_21 is available from the project's downloads page. Hosted on SourceForge, Parted Magic is licensed under the GPLv2.

Open source firm Opsview updates IT monitoring platform


V4 gets enhanced data visualisation, dashboarding; Opsview Pro targets SMB market

Privately held, open source IT monitoring firm Opsview has revved its platform with data visualisation and dashboard technology, saying it better helps firms monitor their physical, virtual and hybrid cloud environments.

Opsview has around 19,900 customers using its free open source offering and a further 100 customers paying for all the bells and whistles as well as a support package in the shape of the Enterprise version.

Founder and CEO Michael Walton told CBR the company is also launching Opsview Pro, aimed at the small to medium business market and somewhere between the open core and Enterprise version in terms of features and functions.

V4 can monitor cloud-based applications from the likes of Salesforce.com by tapping into their published APIs. Walton said the focus of the latest release was around the front end - visualisation and dashboarding - as well as some performance tweaks. Improved customisation is said to make it easier for users to identify and diagnose system incidents.

Versus the competition from industry giants like CA, BMC, IBM and HP, Walton said the firm's relatively small size plays as an advantage, as it can be more nimble, whilst its open core licensing model means there is a sizeable community of users who offer valuable feedback, add-ons and testing.

Walton claimed the firm doubled its revenue in the past 12 months and expects to double it again in the next 12. It will approximately double headcount this year to around 60 staff worldwide, predominantly in the UK, US and with some development staff in India.

Helping Walton guide Opsview's strategy on the board of directors is Stephen Kelly, formerly CEO of Chordiant in the US and then Micro Focus in the UK.

Describing Opsview V4 Walton said, "We have completely re-engineered Opsview to provide users with a more consistent and user-friendly experience. With more than 30 major enhancements in this new release, organisations will now be able to identify, diagnose and fix systems incidents with even greater accuracy and speed."

Opsview's customer roster includes Ericsson, Electronic Arts, Allianz, Binck Bank, Equiduct, Dimension Data, Irish Revenue, and Cornell and Yale Universities.

Monday, April 23, 2012

Wordpress used to create game publishing platform


You would be forgiven for thinking Wordpress was just a tool for bloggers. For many this is as far as their use of the open source platform would go, but Dan Milward has taken the highly popular CMS and built a gaming system on top of it.

Milward is the founder of Wellington-based Instinct Entertainment, best known for creating the popular WP e-Commerce plugin, which has had more than 1.7 million downloads.

Tomorrow Milward’s team at Instinct are launching their latest creation called Gamefroot, an HTML5 game creator built using Wordpress’ database management system.

Milward demoed Gamefroot at Wordcamp NZ on Saturday for a crowd of Wordpress users and developers. The platform lets users create 8-bit style games using predefined sprite sheets, or by uploading their own graphics. You can give objects qualities like health or damage, add animations, and even add background music by uploading your own or choosing from music composed by Instinct.

The game’s information is stored as custom posts in the database, which when looked at from the back end would be very familiar to Wordpress users.

The Gamefroot project has been more than five years in the making, and is based on a previous product from Instinct called Game Creator.

Game Creator used Flash Lite to create platform games using pre-constructed tiles and sprites. Milward says within a few days of launching Game Creator, users had created 350 Flash Lite games, more Flash Lite games than had existed altogether previously.

“This really proved to us the concept could work,” says Milward.

Since then Instinct has received funding from a Japanese investor to develop Gamefroot. Although initially reluctant to talk about opportunities to monetize the platform, Milward says if games are popular enough Instinct will team with the creator to take the game data and publish it for the Apple App Store, splitting any revenue made.

WampServer delivers a smart, Windows-friendly platform for Apache, MySQL and PHP-based apps


While many popular website applications (WordPress, Drupal, Joomla, etc.) are open source and therefore freely available, running these PHP-based apps on a Windows IIS web server requires a bit of retrofitting.

Although Microsoft has streamlined the process of installing and configuring the PHP scripting language on IIS 7.0, many web administrators consider the fix, which involves enabling FastCGI extensions, too risky for production environments. Others simply wish to set up an independent test environment for evaluating open source apps.


Moreover, PHP extensions are not the only hurdle for Windows webmasters. A large number of PHP-based open source apps rely on backend databases (MySQL, MariaDB, PostgreSQL, etc.) that also need special handling to run on Windows.

Enter WampServer, an open source product that installs a PHP-apps-ready platform consisting of Apache web server, MySQL database, PHP, plus several helpful GUI-based utilities. WampServer can be installed on virtually any version of Windows, either desktop or server. With an active user community, industrial-grade training programs and a large installed base, WampServer is one of the world's most popular Apache-MySQL-PHP distributions.

Saturday, April 21, 2012

Microsoft also wants the Open Source Community

Microsoft has had a notoriously rocky historical relationship with the open-source community but the recent creation of a specialized Open Technologies spinoff has brokered some change.

Friday, April 20, 2012

W3m: Simple Text-Based Web Browser


w3m is a simple text-based browser with support for SSL connections, tables, frames, colors and inline images on suitable terminals. Generally, it renders pages in a form as true to their original layout as possible. With w3m you can browse web pages through a terminal emulator window (xterm, rxvt or something like that). Moreover, w3m can be used as a text formatting tool which typesets HTML into plain text.

w3m also runs on Unix-based systems (Solaris, SunOS, HP-UX, Linux, FreeBSD, and EWS4800).

Generating Ext JS and Java CRUD Applications with CDB.

Clear Data Builder for Ext JS (CDBExt) is an open source tool that automatically builds Ext JS/Java EE CRUD applications given one or more annotated Java interfaces. The generated JavaScript and Java code enforces best Ext JS and Java EE practices and is deployed on the development instance of Tomcat, ready to run. A tiny library of Ext JS components accompanying CDBExt, called Clear components, enables transactional data sync with the application server, including deeply nested hierarchical data transactions, features not supported in native Ext JS 4.
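
To make the "annotated Java interfaces" input concrete, here is a hedged sketch of the kind of artifact such a generator consumes. The annotation and type names below are invented for illustration; consult the CDBExt documentation for its real annotations.

    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.util.List;

    // Stand-in annotations; CDBExt's actual annotation names may differ.
    @Retention(RetentionPolicy.RUNTIME) @interface CrudService { String table(); }
    @Retention(RetentionPolicy.RUNTIME) @interface Select { String value(); }

    class Order {  // a plain data object for the generated code to move around
        long id;
        String customer;
        double total;
    }

    // The developer writes only this interface. From it, a CRUD generator can
    // emit the Ext JS store, grid and proxy on the client side, plus the
    // DAO/servlet plumbing on the Java EE side, ready to deploy to Tomcat.
    @CrudService(table = "orders")
    interface OrderService {
        @Select("select * from orders")
        List<Order> getOrders();
    }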

Thursday, April 19, 2012

Google's Chrome OS Aura


This new interface, Aura, is both a new desktop window manager and shell environment. Aura is an optional replacement for last year’s Chrome OS single Web browser interface. With it you can have multiple, small browser windows, each with its own set of tabs, against a desktop screen background. These windows can be overlapped like the windows on older desktops such as GNOME 2.x, Windows 7 or Mac OS X.

You also get, like OS X’s dock, a status bar on the bottom of the screen, with icons for each of the open windows and system status displays such as the clock and battery. When you maximize a window to full screen, the task bar vanishes. You can always bring it back, though, by moving your pointer down to the screen’s bottom.

You can also tear off browser tabs and drag their windows to new positions, or merge a tab with another window’s tab strip. Each window also gets a rectangle in the upper right that you can toggle to switch between its maximized and smaller displays. You can also resize windows by dragging any edge. If you click on an icon in the dock you’ll see icons for all your installed apps and bookmarks. You can also find these on Chrome OS’ new-tab page.

If Aura sounds familiar, it should. While so many of the new operating systems want to force you to have only one application at a time in front of your face, Chrome OS is returning to the older way of enabling you to have multiple windows up at once. This is a win as far as I’m concerned.

This latest edition of Chrome OS, 19.0.1048.17, includes numerous security fixes. In addition, it now includes better support for multiple monitors and it provides native support for the tar, gz, and bzip2 compressed file archive formats.

This is a developer's release, so it's for power users only. While technically it's only available for users who already own an Acer AC700 or Samsung Series 5 Chromebook (the first Chromebooks, the CR-48s, are not supported in this release), you can also run Chrome OS from a USB stick or in a VirtualBox virtual machine.

Red Hat warily hopeful about Microsoft's Open Technologies Inc


Linux leader Red Hat today applauded Microsoft's recent launch of an open technologies subsidiary but is clearly taking a wait-and-see attitude.

Naturally. In the past, Microsoft described Linux and open source as a "cancer" on the software industry. Red Hat pointed out that the path to openness was not without opposition. With that in mind, Red Hat today penned a warily hopeful response to the news.

Tuesday, April 17, 2012

Citrix Throws a Spanner in the Cloud Market


Citrix has announced that it is submitting the popular CloudStack platform to the Apache Foundation. This comes as a surprise, as Citrix acquired Cloud.com only last year for a few hundred million dollars!

With this, Citrix is also dumping its own distribution of OpenStack, called Project Olympus. This move has shaken the industry and the analysts. Until now, OpenStack enjoyed all the attention and was aiming to become the dominant open source cloud platform. Though Eucalyptus has been around for a while, it failed to garner as much industry support as OpenStack. With CloudStack moving into the Apache Foundation's stable, there is now a credible, serious open source cloud offering that can be a true alternative to OpenStack. This announcement will certainly force Rackspace to revisit its strategy. Rackspace has been successful in rallying the bigwigs of the industry, including NASA, AT&T, HP, Dell, NetApp, Akamai, SoftLayer, and Internap, around OpenStack in hopes of taking Amazon Web Services head-on. But the latest announcement from Citrix changes the whole equation!

Open Source CloudStack 3.0 Is Coming


Over the last year I have been working on the CloudStack open source cloud computing project. This month we are getting ready to launch CloudStack 3.0, which really raises the bar for cloud computing platforms. So what is CloudStack? It is an infrastructure-as-a-service (IaaS) platform that orchestrates virtualized servers into an elastic compute environment. The project was originally developed by Cloud.com and has been sponsored by Citrix since it acquired Cloud.com in July 2011.

How Open Source Comments (by Programming Language)

We recently looked at the commenting practices of active open source projects. The results are quite impressive: the average comment density of open source code is around 19% (comment density = comment lines / (comment lines + source code lines)). That is much more documentation than most people thought!

However, such a rough number needs discussion. Here, I look at the comment density on a programming language basis. As it turns out, the comment density of active open source projects varies by programming language. Java is leading the bunch, but that needs further discussion.
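As a rough illustration of the metric itself, the sketch below computes comment density for a single Java source file; it counts only // line comments, so block comments (/* ... */) would need extra handling:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    // Minimal sketch of the comment-density metric:
    // density = comment lines / (comment lines + source lines).
    // Only "//" line comments are counted here.
    public class CommentDensity {
        public static void main(String[] args) throws IOException {
            List<String> lines =
                    Files.readAllLines(Paths.get(args[0]), StandardCharsets.UTF_8);
            int comments = 0, source = 0;
            for (String raw : lines) {
                String line = raw.trim();
                if (line.isEmpty()) {
                    continue; // blank lines count toward neither bucket
                } else if (line.startsWith("//")) {
                    comments++;
                } else {
                    source++;
                }
            }
            int total = comments + source;
            double density = total == 0 ? 0 : (double) comments / total;
            System.out.printf("comment density: %.1f%%%n", density * 100);
        }
    }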

Read More..

10 great websites for PHP code snippets and tutorials

1. PHP Builder - Offers articles about the different features of PHP, as well as a code library with free-to-use code snippets for your own website and applications.

2. PHP.net - The official website of the PHP programming language. They offer complete documentation of PHP, downloads of the latest PHP version, and news.

3. The Basics - Learn the very basics of the language, from loops and arrays to sending email messages with PHP.

4. Pixel2life - Offers some more extensive guides, with tutorials about shopping carts, file uploads, security and more. You can also find tutorials on lots of other platforms like Flash, Photoshop and 3D rendering.

5. WeberDev - 2800+ tutorials and code examples for PHP. You can find information here for almost any aspect of PHP coding, including beginner guides.

6. PHP at W3Schools - A reference guide for some of the most common functions of PHP. They also offer a guide for setting up a MySQL connection and updating your database.

7. PHP Freaks - Useful PHP tutorials and a forum where you can ask questions of other members of the community.

8. Devshed - 400+ more advanced tutorials on PHP, MySQL, and lots of other programming languages. Their website Scripts.com offers 1000+ scripts to use in your website and build upon.

9. Planet PHP - The latest news aggregated from PHP-related blogs, keeping you up to date with new versions, security flaws, etc.

10. Hotscripts - 15000+ PHP scripts. Most of the scripts are free, but there are also paid scripts available on this website. There are lots of other languages available for download as well, like ASP, Ajax, and JavaScript.

Read More...

Monday, April 16, 2012

Programming for mind management

A wise king wanted advice about what to do when he faced a real crisis in his life. He met his master. The master gave him a locket and told him to open it only when he was in difficulty. After some years he was defeated in a battle and he lost his kingdom.

He was worried and was hiding in a cave. He suddenly remembered what his master had told him. He opened the locket and there he found a note, "Even this will pass," and a second note which said, "If one door closes, another door opens." He gained strength from this advice and after a month won his kingdom back. He went to his master, thanked him, and asked a question: "Why does the door close?" "The door is always open; your mind is closed," replied the master.

We live in two worlds: the visible and the invisible, the outer world and the inner world. Our body is in the outer space, but our mind is in another space. Just as our body gets dirty if we stay in a dirty place, our inner self, our mind, gets dirty if we visit dirty areas of the inner world. If one broods over worry, dislike, anger, or jealousy, the mind is polluted. With such pollution, you find the outer world a pain. The real pain is when our mind lives in negativity.

What type of world are we in? This is what one should really be aware of. Where we are placed psychologically is what we have to watch for. One tends to believe that the outer world is the real world, and not the inner world of the mind.

Read More..

IBM & Red Hat Will Reportedly Join OpenStack


IBM and Red Hat are expected to step out of the background and officially join the OpenStack movement. According to GigaOm, they could sign up this week or next at the OpenStack Spring Conference. IBM was on a list of contributors to the open source cloud project in February, and Red Hat reportedly did much of the packaging work for the new Essex release of OpenStack that should make it easier to bundle into Linux distributions like Fedora and Ubuntu.

Sunday, April 15, 2012

Release of OpenNebula 3.4 with the Most Powerful Open Cloud Solution for VMware


OpenNebula 3.4 is the most feature-rich open-source alternative to VMware's datacenter and cloud suite, delivering enterprise-class functionality, stability, and scalability with broader platform support and integration capabilities for the KVM, Xen, and VMware hypervisors.

OpenNebula is proud to announce the release of a new stable version of its widely deployed open-source management solution for enterprise data center virtualization. OpenNebula 3.4 delivers the most feature-rich, customizable solution to build enterprise virtualized data centers and private clouds on Xen, KVM, and VMware, providing cloud consumers with a choice of interfaces, from open cloud APIs to de facto standards like the Amazon API. This new release brings countless valuable contributions from many industry members of its large user community, including Research in Motion, Logica, Terradue 2.0, and CloudWeavers, and from research and academia, especially Clemson and Vilnius Universities.

Red Hat and 10gen Create Compelling Open Source Data Platform


In 2011, Red Hat announced a partnership with 10gen to provide the first NoSQL database solution for its OpenShift PaaS offering. As enterprises increasingly look for proven solutions to handle big data needs, Red Hat and 10gen have broadened their collaboration around NoSQL, helping developers deliver on the promise of big data, Internet, and cloud technologies working with Red Hat solutions.
"Web and enterprise developers need solutions that allow them to rapidly deploy applications that deal with large amounts of data in flexible public or private cloud environments," said Scott Crenshaw, vice president and general manager, Cloud Business Unit at Red Hat. "Combining Red Hat's technology stack with 10gen's MongoDB NoSQL database will help developers to deliver on the promise of big data and cloud technologies."

Saturday, April 14, 2012

FuseSource Launches Beta Program for New Open Source Integration and Messaging Platforms


The beta program includes:

Fuse ESB® Enterprise 7.0 is an enterprise-class, open-source integration platform based on FuseSource offerings that have been proven in mission-critical deployments across hundreds of large enterprises. Customers can purpose-fit the highly flexible Fuse ESB® Enterprise platform to virtually any environment.

Fuse™ MQ Enterprise 7.0 is an open-source, standards-based messaging platform that can be deployed in any development environment with a very small footprint. The Fuse™ MQ Enterprise platform makes it easy for IT organizations to deploy and connect customized integrations.
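Fuse MQ Enterprise is based on Apache ActiveMQ, so a standard JMS client should be able to talk to it. Here is a minimal producer sketch; the broker URL and queue name are illustrative assumptions:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    import org.apache.activemq.ActiveMQConnectionFactory;

    // Minimal JMS producer sketch. Fuse MQ Enterprise is based on Apache
    // ActiveMQ, so the standard JMS API applies. The broker URL and queue
    // name below are illustrative assumptions.
    public class FuseMqProducer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            try {
                Session session =
                        connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("orders.incoming");
                MessageProducer producer = session.createProducer(queue);
                TextMessage message = session.createTextMessage("hello, broker");
                producer.send(message); // one-off send to the queue
            } finally {
                connection.close(); // also closes the session and producer
            }
        }
    }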