Tuesday, June 26, 2012

Brightcove Open Sources App Cloud, Bets Big On Dual-Screen Apps For Apple TV


It was just a year ago that video distribution specialist Brightcove announced it was entering a new market with App Cloud, a cloud-based product for quickly building multiplatform mobile applications. Well, at this year’s annual Play conference, Brightcove will make App Cloud available for free under an open source license. It is also making available a set of tools that it hopes will push the dual-screen applications market forward.



Brightcove’s App Cloud is essentially a WYSIWYG editor for app development, designed to let developers roll out rich apps for iOS and Android phones and tablets with a minimum of actual programming. App Cloud provides templates to get developers up and running quickly, which can also be customized with native code. App Cloud also includes Brightcove’s Workshop debugging tool, so that once an app is built, users can test and preview it natively on the device and make changes in real time before submitting to the App Store or Google Play.

The new free version of Brightcove’s App Cloud is being made available with an open source SDK, enabling developers to build, test, and debug an unlimited number of apps on their devices, then compile them and release them for Apple iOS and Google Android.

It’s more or less a freemium model for app building. With App Cloud Core you can build and release as many apps as you want. But if you want features like real-time analytics, push notifications, and native ads, you can upgrade to App Cloud Pro for $99 a month. And for those who need an even more robust feature set, there’s an enterprise version for high-volume apps with custom pricing plans based upon usage.

In addition to open sourcing App Cloud, Brightcove is pushing one particular feature set that could change the way we watch TV. Its App Cloud Dual-Screen Solution for Apple TV uses a set of APIs that allow tablet and mobile users to have a truly integrated second-screen experience. By leveraging Apple’s AirPlay technology, App Cloud users can create applications that use the mobile device for search and navigation while the Apple TV plays back the video.

While developers today can already build robust dual-screen apps that connect the iPad or iPhone and Apple TV, few apps actually take advantage of the capability. Brightcove’s Dual-Screen Solution simplifies the creation of these apps, enabling a whole new level of interactivity for app makers to unlock. It also has the potential to shift viewing habits. Today, most TV apps treat the mobile phone or tablet as the “second screen,” but with these dual-screen apps, Brightcove CMO Jeff Whatcott says, it is the mobile device that becomes the first screen, since it is used to control the experience and it is where all the program information appears.

Intel releases Ivy Bridge programming docs


Last week Intel released the graphics programming documentation and register specifications for Ivy Bridge processors. The Ivy Bridge graphics core programming documentation spans 17 files spread across four volumes and 2,468 pages of technical detail on Intel's latest-generation graphics, Phoronix reported. The complete documentation is publicly available as a free download.


“While Intel has a large team of developers within their Open-Source Technology Center working on their open-source Linux graphics driver, they continue to produce very detailed programming documentation for the public. These documents cover the key registers for their hardware and other information to benefit anyone wishing to get into low-level graphics driver programming or just wanting to better understand how Intel's latest graphics core works,” the report reads.

Intel has put out documentation for its graphics chips for several generations now. The Ivy Bridge processors have been available since April, and the open-source Linux graphics driver code has been available for more than a year, but Intel's developers only recently received permission to make this public drop of the programming documentation.

This Intel HD Graphics Open Source Programmer’s Reference Manual (PRM) describes the architectural behaviour and programming environment of the Ivy Bridge chipset family. The Graphics Controller (GC) contains an extensive set of registers and instructions for configuration, 2D, 3D, and video systems. The PRM describes the register, instruction, and memory interfaces, and the device behaviours as controlled and observed through those interfaces. The PRM also describes the registers and instructions, and provides detailed bit/field descriptions.

This documentation is divided into four volumes containing 17 PDF files. The first volume covers the graphics core, MMIO registers and programming environment, memory interface, commands for the render engine, blitter engine, the video codec engine command streamer and the GT interface register. The second volume covers 3D media pipeline, L3 cache/URB, media and general purpose pipeline and the multi-format transcoder. Volume 3 covers VGA and Extended VGA registers, PCI registers, and the north/south display engines, while the fourth volume covers subsystem cores - message gateway, URB, video motion estimation, and execution unit ISA.

“It's nice to see that even the Ivy Bridge execution unit is covered with this documentation for doing general purpose computing (GPGPU). At this time that's one of the missing features of the open-source Intel Linux driver,” the report added.

Ivy Bridge represents a ‘Tick’ in Intel’s ‘Tick-Tock’ cycle of upgrades. The ‘Tick’ stage is essentially a die shrink of the current architecture - in this case Sandy Bridge. Ivy Bridge is still based on the same microarchitecture as Sandy Bridge, the main difference being that it is built on a 22nm fabrication process instead of 32nm. It is not purely a die shrink, though, as Intel has added new features as well. Ivy Bridge reuses the LGA 1155 socket from Sandy Bridge, making it backward compatible with Sandy Bridge motherboards. It also adds Intel Rapid Storage Technology 11, native USB 3.0, support for up to three displays from the integrated graphics, PCIe Gen 3, UEFI BIOS and DirectX 11.

Monday, June 25, 2012

France awards €2 million open source support tender


The central IT department for the French government has awarded a €2 million contract to support 350 different open source tools across fifteen ministries. The three-to-four-year contract, which was officially tendered last year, went to consulting companies Alter Way, Capgemini and Java specialist Zenika.



The supported software and technologies include several Linux distributions, such as Ubuntu, Debian and CentOS, and programs including Firefox, OpenOffice, LibreOffice, OpenERP, Nagios and Drupal. Programming languages such as PHP and Python are also within its scope.

As part of the contract, the government's IT department is requiring the companies to contribute improvements made to the open source code they support back to the respective communities. The current contract only covers bug fixes and maintenance of existing installations; development of new features will be covered by a new tender which has not yet been published. Not participating in the contract is the French Ministry of Economy, Finances and Industry, which has awarded its own open source support contract.

Eight years ago, a similar contract failed, resulting in "expensive proprietary software solutions" being rolled out throughout the French government, according to the news site LeMagIT. So aside from saving costs, increased and more sustainable adoption of open source tools is a priority.

Nine open source big data technologies


Big Data is booming these days, as more and more companies realize the benefit of storing data and leveraging it for useful insights. At the forefront of this Big Data revolution is open source technology, since the majority of Big Data companies prefer it over closed source alternatives. Here are nine open source Big Data technologies that you should keep an eye on:

Apache Hadoop
Apache Hadoop was originally created by Doug Cutting to support his work on Nutch, an open source web search engine. Hadoop is essentially a MapReduce engine and a distributed file system merged together, and was initially designed to meet Nutch’s multi-machine processing requirements. The basic principle behind Hadoop is that it splits and distributes big data over a series of nodes running on commodity hardware.
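As a rough illustration of the MapReduce model Hadoop implements, here is a hedged sketch of a word-count job written for Hadoop Streaming, which lets any language act as mapper and reducer by reading lines on stdin and emitting tab-separated key/value pairs on stdout. The file paths and streaming jar name in the comment are assumptions, not part of Hadoop's fixed interface.

```python
#!/usr/bin/env python
# wordcount_streaming.py -- minimal Hadoop Streaming word count (sketch).
# Example invocation (paths and jar name are assumptions, adjust per cluster):
#   hadoop jar hadoop-streaming.jar \
#     -mapper "python wordcount_streaming.py map" \
#     -reducer "python wordcount_streaming.py reduce" \
#     -input /data/in -output /data/out
import sys

def mapper():
    # Emit "<word>\t1" for every word read from stdin.
    for line in sys.stdin:
        for word in line.split():
            print("%s\t%d" % (word.lower(), 1))

def reducer():
    # Hadoop sorts mapper output by key, so all counts for a word arrive together.
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print("%s\t%d" % (current, total))
            current, total = word, 0
        total += int(count)
    if current is not None:
        print("%s\t%d" % (current, total))

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```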

R
Created by Ross Ihaka and Robert Gentleman at the University of Auckland, New Zealand, in 1993, R is an open source programming language that has become the de facto standard for statistical analysis of very large data sets, since it was built with statistical computing and visualization in mind.

Cascading
Cascading is an open source abstraction layer for Hadoop that works as an alternative to writing raw MapReduce jobs. Cascading allows data processing workflows to be expressed in any JVM-based language, with the goal of hiding the inherent complexity of MapReduce so that people working on log file analysis, bioinformatics, machine learning and similar workloads don’t have to bother with its nitty-gritty.

Scribe
Developed and released in 2008 by social media giant Facebook, Scribe was designed to aggregate log data streamed in real time from a large number of servers; its original purpose was to handle Facebook’s own scaling problems. So far, Scribe has been successful and currently handles tens of billions of messages a day.

ElasticSearch
ElasticSearch is an open source search server developed by Shay Banon and based on Apache Lucene. ElasticSearch’s main selling point is that it works with little special configuration and scales out readily while still supporting near real-time search and multitenancy. It is currently used by a number of high-profile companies, including Mozilla and StumbleUpon.
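Because ElasticSearch exposes everything over a JSON/HTTP API, indexing and searching a document can be sketched with nothing more than a standard HTTP client. In the hedged example below, the node address, index name and document are assumptions chosen for illustration.

```python
# Minimal sketch of ElasticSearch's JSON-over-HTTP API (node and index names assumed).
import json
import urllib.request

BASE = "http://localhost:9200"  # assumed local ElasticSearch node

def send(method, path, body=None):
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(BASE + path, data=data, method=method,
                                 headers={"Content-Type": "application/json"})
    return json.load(urllib.request.urlopen(req))

# Index a document (refresh=true makes it searchable immediately), then query it.
print(send("PUT", "/articles/_doc/1?refresh=true",
           {"title": "Open source big data", "year": 2012}))
print(send("POST", "/articles/_search",
           {"query": {"match": {"title": "big data"}}}))
```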

Apache Hbase
Designed to run on top of Hadoop’s distributed filesystem, Apache HBase is an open source, non-relational, columnar distributed database modeled after Google’s BigTable. HBase’s most notable user is Facebook, which adopted the platform in 2010 for use in its messaging service.

Apache Cassandra
Another one of Facebook’s aces, Apache Cassandra was originally developed as a NoSQL data store to power the social network’s inbox search feature. Facebook has since abandoned Cassandra in favor of HBase, but it is still used by a number of high-profile companies such as Netflix, notably as a back-end database for its streaming services. Cassandra is currently available under the Apache License 2.0.

MongoDB
MongoDB is a popular open source NoSQL data store that keeps structured data in JSON-like documents with dynamic schemas, stored in a binary format called BSON (Binary JSON). Created by the founders of DoubleClick, MongoDB is currently used by several large enterprises such as Craigslist, Disney Interactive Media Group, Etsy, The New York Times, and MTV Networks.
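To make the idea of dynamic, JSON-like documents concrete, here is a hedged sketch using the PyMongo driver; the server address and the database and collection names are assumptions for the example.

```python
# Minimal PyMongo sketch: documents in one collection need not share a schema.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local server
posts = client.example_db.posts                    # database/collection names assumed

posts.insert_one({"title": "Hello", "tags": ["intro"], "views": 10})
posts.insert_one({"title": "Video post", "video_url": "http://example.com/v.mp4"})

# Query by any field; MongoDB matches whichever documents carry it.
for doc in posts.find({"views": {"$gte": 5}}):
    print(doc["title"], doc.get("tags", []))
```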

Apache CouchDB
Yet another open source NoSQL database, CouchDB uses a blend of JSON, JavaScript, MapReduce, and HTTP to store and query data. The platform was originally created in 2005 by former IBM developer Damien Katz as a storage system for large-scale objects. One of CouchDB’s more popular users is the British Broadcasting Corporation, which uses it for its dynamic content platforms.
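Because CouchDB’s interface is plain HTTP and JSON, creating a database, storing a document and reading it back can be sketched with a generic HTTP client; the node address and database name below are assumptions.

```python
# Minimal sketch of CouchDB's REST interface (local node and database name assumed).
import json
import urllib.request

BASE = "http://localhost:5984"  # assumed local CouchDB node

def couch(method, path, body=None):
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(BASE + path, data=data, method=method,
                                 headers={"Content-Type": "application/json"})
    return json.load(urllib.request.urlopen(req))

print(couch("PUT", "/articles"))                       # create the database
print(couch("PUT", "/articles/first-post",             # store a document
            {"title": "Hello CouchDB", "year": 2012}))
print(couch("GET", "/articles/first-post"))            # read it back
```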

Thursday, June 21, 2012

Open Source CRM Alternative X2Engine Torques Marketing Campaigns


X2Engine Adds Intuitive yet Powerful Marketing Capabilities to its Popular Open Source CRM Alternative

Today X2Engine releases the latest version of X2Engine CRM, which includes intuitive yet powerful marketing capabilities as well as a variety of other key enhancements. X2Engine CRM 1.5 enables users to quickly and easily create, manage and analyze highly targeted marketing campaigns.

The integrated marketing capabilities in X2Engine CRM provide users with an intuitive, flexible platform from which they can direct targeted marketing emails and other communications to prospects and customers. Users start by creating dynamic, targeted lists simply by filtering X2Engine contacts on any field or combination of fields. With flexible WYSIWYG and HTML editors, users develop and preview campaign content. After launching the campaign, metrics such as the number of emails opened, viewed and clicked let users measure campaign success and further target groups of prospects based on their reaction to the campaign.

“It's critical that sales and marketing are closely aligned and interacting with prospects and customers in a truly consistent, unified manner,” said John Roberts, X2Engine Founder and CEO. “With the latest enhancements X2Engine CRM provides organizations with the ability to rapidly develop, launch and manage targeted marketing programs, while providing a comprehensive view of the prospective customer including every interaction.”

X2Engine CRM 1.5 also includes a variety of other key enhancements, including integration with Google Apps: X2Engine CRM utilizes OAuth, an open standard for authorization, enabling users to log in to X2Engine CRM with their Google ID and access their Google Apps data and functionality.

Data cleansing tools: Advanced data cleansing tools ensure data integrity including removal of duplicate records.

Live Notifications: Immediately see when new notifications have arrived, as well as how many are awaiting you.

X2Engine CRM Core Modules
Web and Facebook Lead Capture Forms
Lead Nurturing, Scoring and Intelligent Routing
Contact Activity Management
Sales Process Work-flow Engine
Email Correspondence
Product and Sales Quotes
User Profile Pages and Activity Streams
Field Security, Roles and Sales Teams
Visual Form Editor for Admins
Reporting Dashboard
iPad and Mobile Device Apps

Open Source, the Fuel for Cloud Disruption


Open source has proven to be a good option for building, managing, and delivering scalable infrastructure-as-a-service (IaaS) clouds and platform-as-a-service (PaaS) clouds. Typically, most open source cloud platforms support multiple virtualization technologies, giving enterprises a range of choices from multiple vendors of closed as well as open source technologies. Some examples are Eucalyptus, OpenStack, Cloud Foundry, OpenNebula, Red Hat OpenShift, Xen Cloud Platform Project (XCP), and the newest kid on the block, Citrix CloudStack 3. While some apprehension still exists around open source use, there is a shift in attitude as enterprises look to capitalize on efficiency, and technologies like virtualization and cloud computing become essential components of IT architecture.

OpenStack is a massively scalable cloud operating system that helps with the delivery and management of infrastructure. OpenStack was initiated by Rackspace and NASA, and is supported by almost 180 organizations, including Intel, Dell, Canonical, AMD, Cisco, HP, SUSE Linux, Red Hat, and IBM. It is a collaborative effort by thousands of developers and technologists globally aimed at helping SMBs, service providers, data centers, corporations and researchers roll out and leverage industry-grade public and private clouds.

Cloud Foundry is an open PaaS initiated by VMware that lets users choose from multiple deployment clouds, development frameworks, and application services. Eucalyptus has been in the space for more than three years and helps with implementing IaaS clouds. It also provides a solid hybrid cloud deployment option, since it supports the Amazon AWS application programming interfaces for building and delivering applications on top of it. Red Hat's OpenShift, an auto-scaling PaaS, is also gaining traction and support from many organizations. XCP helps with server virtualization and building cloud platforms for enterprises using the Xen hypervisor. OpenNebula is another open source standard for data center virtualization; it offers customizable solutions for managing virtualized data centers based on Xen, VMware, and KVM. Citrix CloudStack, for its part, is supported by almost 50 organizations, including software and service providers such as RightScale, Engine Yard, Opscode, CumuLogic, Puppet Labs, Hortonworks, Equinix, Juniper Networks, and ScaleXtreme.

Today, the open source platforms and technologies referenced above are being adopted and leveraged by small to large organizations for different reasons and in different capacities. Some are used to create and deliver internal infrastructure, applications or workloads, while others are leveraged to build public cloud services. Similarly, open source virtualization platforms like Xen and KVM are already a critical piece of many cloud solutions today. One example is the Amazon Web Services (AWS) cloud platform: the AWS IaaS cloud is perhaps the most popular public cloud, estimated to be a $1 billion business.

Amazon's core cloud service, EC2 (compute), is powered by Xen. This alone should provide some reassurance to people who are still skeptical. Despite all the advancements in open source cloud platforms, however, CIOs have been apprehensive about open source software because of the absence of a formal support infrastructure. Other concerns include security, the lack of proper roadmaps, complexities around IP rights, and the ability to evaluate and endorse open source projects. But it's not all negative: open source innovations are acting as a catalyst for positive shifts in the computing paradigm, ultimately helping CIOs, and the market for open source cloud software continues to grow day by day.

Open source is also driving another interesting change. When it comes to contributing to open source initiatives, it's all about co-creation. Today, CIOs and service providers alike are expected to "do more with less," and the resulting tight constraints on budgets and timelines foster a culture of co-creation through collaborative effort rather than competition. The good news is that organizations now have multiple open source options for solving a problem and can choose the one that best meets their requirements. We've reached a point where it's important for organizations to create a strategy around open source platforms and target the ones that align with business propositions and strategic goals, rather than focusing on tactical goals only. In the context of cloud computing, experts believe that open source promises to make the technologies behind the cloud a commodity.

People sometimes misunderstand the cost model associated with open source and fail to account for the inevitable costs; there is confusion between a no-cost and a low-cost option. Although open source software incurs no acquisition cost, since there are no license fees or annual charges, organizations still need to budget for administration and support. Even so, compared with commercial software platforms, the total cost of ownership will typically be significantly lower.

Security has always been a much-discussed issue with open source, and it's a valid concern. In the new era of open source innovation, communities and participation sponsored by technology organizations are not only getting stronger but also fueling advancement through co-innovation, and that has changed the perspective on open source security as well. Open source software is available for anyone to use and work with. This means a large global community of developers contributes to the code; they inspect it, review it, test it against various scenarios, and analyze it for vulnerabilities. This process makes open source software more secure: quality code is delivered back to be tested and inspected again and again, and it keeps evolving as many minds add intelligence to it. With a commercial software product, the vendor organization helps you solve the problem; in the open source scenario, perhaps the whole technology world conspires toward your success.

There are other advantages of open source in cloud computing and virtualization: affordability (lower TCO), flexibility to customize, transparency in the stack, no vendor lock-in, better interoperability, and a commitment to portability when it comes to migrating workloads. Having a choice of technologies, frameworks and tools for building applications and peripherals is another advantage. The value of community participation and support from a vast base of developers across geographies brings more firepower and extra leeway.

No doubt closed source projects will continue to have markets; however, they will be under constant pressure from the impact and penetration that open source platforms are creating and are capable of advancing. Open source is very much happening, and it cuts across all the layers of cloud architecture. This means organizations today can't afford to ignore open source and will need to invest in talent, experiment, or lay out a full-blown strategy when thinking about cloud as an enabler. The future may belong to businesses that take on new technology bets: trying different computing and delivery approaches and experimenting with constantly evolving technologies that are more open and collaborative.

Energy Department Launches Open-Source Online Training Resource To Help Students, Workers Gain Valuable Skills


The Energy Department and SRI today officially launched the National Training and Education Resource (NTER), an open-source platform for job training, workforce development and certification. NTER was envisioned by the Department and developed by SRI.

As part of the Obama Administration’s commitment to invest in skills for American workers, the Energy Department today officially launched its National Training and Education Resource (NTER), an open-source platform that brings together information technologies to support education, training and workforce development. The program facilitates training programs across a wide range of applications – from home energy audits to science, mathematics and engineering education to manufacturing industries. NTER is one of a number of significant steps the Department is taking to ensure U.S. workers have the training they need to lead in the 21st century global economy.

Building off a beta version released last year, NTER provides public and private organizations free access to the federal resources available at www.nterlearning.org, offering an open-source, web-based interactive learning environment for developing customizable training programs and materials. This resource also allows partner organizations to develop and distribute training materials quickly and cost-effectively, reaching more individuals and saving money.

Over the past year, the Energy Department has worked with private software developers to enhance the platform’s 3D capabilities, providing highly interactive content such as visual walkthroughs or full performance-based assessments. NTER recently transitioned to the cloud, providing scalable and flexible architecture that supports innovative, efficient online engagement.

Developing a New Generation of Energy Workers

At its inception, NTER provided training and workforce development support around housing energy efficiency, offering a host of interactive lessons for today’s energy audit and weatherization experts. Since 2009, these efforts have helped the Obama Administration complete energy efficiency upgrades in more than 1 million homes nationwide.

As part of the Department’s commitment to provide Americans with the skills they need to compete in the global clean energy race, we are expanding training modules to include other clean energy sectors.  For example, the Energy Department is working with the Interstate Renewable Energy Council (IREC) to develop a program for IREC’s Solar Instructor Training Network, which provides training for building code officials who issue permits for solar energy installations on homes and businesses.

In partnership with the Edison Electric Institute and the Center for Energy Workforce Development, the Energy Department helped launch the “Troops to Energy Jobs” program to help increase opportunities for veterans in the energy sector. This initiative will leverage the NTER platform to provide transitional career training to help veterans gain the skills they need to get jobs in the energy industry.

Supporting American Manufacturing and Industrial Workers

Through NTER, the Energy Department is also collaborating with private industry to develop training and certification programs for manufacturing and industrial workers across the country.

Labor organizations, such as the National Insulation Association and the International Association of Heat and Frost Insulators and Allied Workers, are leveraging NTER to offer free, interactive modules to train workers on design, installation and maintenance of mechanical insulation. These tools can also help building architects and engineers, as well as facility owners, better understand mechanical insulation systems.

“The NTER platform has provided our International Union with an up-to-date technology to deliver educational and training materials that we once delivered via manuals and books. Today’s generation of apprentices welcome the use of this tool and technology as a teaching method, demonstrating the basics and fundamentals in the mechanical insulation industry.”

- Thomas Haun, National Training Director, International Association of Heat and Frost Insulators and Allied Workers

The Manufacturing Institute, an affiliate of the National Association of Manufacturers (NAM), is working with the Energy Department to use NTER as a cutting-edge vehicle to implement the NAM-endorsed Manufacturing Skills Certification System across the nation’s network of community colleges and high schools. The certification system provides students with opportunities to earn manufacturing credentials that are accepted across state lines, are valued by a range of employers and can improve participants’ earning power.

Providing Cost-Effective Curriculum for Students

The Energy Department is also committed to providing students at community colleges and universities around the country access to reliable, engaging, inexpensive learning tools. As part of this commitment, we’ve partnered with the Department of Labor (DOL) to support its Trade Adjustment Assistance Community College and Career Training program. Through this effort, DOL is working with community colleges and other higher education institutions to help American workers acquire the skills and credentials they need for high-wage, high-skill employment. One of these projects, led by the Illinois Green Economy Network and the College of Lake County in Grayslake, Ill., is leveraging the NTER platform and sharing interactive course materials within a consortium of over 30 community colleges.

Additionally, in Warren, Mich., Macomb Community College is using NTER to enhance several of its electric vehicle oriented courses as part of its comprehensive education programming in workforce training, professional certification and career preparation programs.

Wednesday, June 20, 2012

Actual Open Source eCommerce cart migration stats reveal market trends


Magnetic One has been a long-time service provider for open source eCommerce merchants. Their biggest product has been a desktop program that allows merchants to manage their online store on their personal computer (rather than across the web) for faster editing, easy and fast import/export, and QuickBooks integration.

Cart2Cart is the brand name of Magnetic One's service that moves an online merchant from one host to another, or upgrades a program to a newer version, or helps a merchant change from one online store program to another, called "shopping cart migration." Their services provide some fascinating insights into the ups and downs of eCommerce popularity.

By far, Cherevatyy says, the largest share of Cart2Cart shopping cart migrations (the program merchants are moving AWAY from) comes from osCommerce and ZenCart; over a third of their business comes from these two older platforms. The rest of their migration business is a long list of eCommerce programs, with the old VirtueMart 1.x at the top, followed by older versions of CRE Loaded, X-Cart and Miva Merchant.

Shopping carts merchants are migrating away from

osCommerce (22%),
Zen Cart (14%)
VirtueMart 1.x (10%)
Carts merchants are migrating towards

Online merchants who use MagneticOne's Cart2Cart service are overwhelmingly migrating to Magento, which accounts for 45% of their migrations. Magento is a complex program, so this could indicate that Magento is very popular, very difficult for merchants to migrate to on their own, or both.

The next most popular migration is to PrestaShop, with OpenCart being the third most popular cart to migrate to. Other popular destinations include CS-Cart, X-Cart, and Shopify.

Shopping carts merchants are migrating towards

Magento (45%),
PrestaShop (12%),
OpenCart (8%)
Cart2Cart service pricing varies depending on the number of products, customers, and orders in the store. The minimum price is US$49, and the company will give you a preview of 10 products/customers/orders before you decide to proceed.

Tilera's TILE-Gx Processor Family and the Open Source Community Deliver the World's Highest Performance per Watt to Networking, Multimedia, and the Cloud


Tilera's MDE 4.0 Software Release Provides the Latest in Open Source Libraries and Tools Making It Easy, Familiar and Efficient to Develop Software on TILE-Gx Processors

GigaOm Structure -- Tilera(R) Corporation, the leader in 64-bit manycore general purpose processors, announced the general availability of its Multicore Development Environment(TM) (MDE) 4.0 release for the TILE-Gx processor family. The release integrates a complete Linux distribution including kernel 2.6.38, glibc 2.12, the GNU tool chain, more than 3,000 CentOS 6.2 packages, and the industry's most advanced manycore tools, developed by Tilera in collaboration with the open source community. This release brings standards, familiarity, ease of use, quality and all the development benefits of the Linux environment and open source tools to the TILE-Gx processor family, both the world's highest-performance and highest performance-per-watt manycore processors on the market. Tilera's MDE 4.0 is available now.

"High quality software and standard programming are essential elements for the application development process. Developers don't have time to waste on buggy and hard to program software tools, they need an environment that works, is easy and feels natural to them," said Devesh Garg, co-founder, president and chief executive officer, Tilera. "From 60 million packets per second to 40 channels of H.264 encoding on a Linux SMP system, this release further empowers developers with the benefits of manycore processors."

Using the TILE-Gx processor family and the MDE 4.0 software release, customers have demonstrated high performance, low latency, and the highest performance per watt on many applications. These include Firewall, Intrusion Prevention, Routers, Application Delivery Controllers, Intrusion Detection, Network Monitoring, Network Packet Brokering, Application Switching for Software Defined Networking, Deep Packet Inspection, Web Caching, Storage, High Frequency Trading, Image Processing, and Video Transcoding.

The MDE provides a comprehensive runtime software stack, including Linux kernel 2.6.38, glibc 2.12, binutils, Boost, stdlib and other libraries. It also provides full support for Perl, Python, PHP, Erlang, and TBB; high-performance kernel and user space PCIe drivers; high-performance, low-latency Ethernet drivers; and a hypervisor for hardware abstraction and virtualization. For development tools, the MDE includes the standard C/C++ GNU compiler v4.4 and v4.6; an Eclipse Integrated Development Environment (IDE); debugging tools such as gdb 7 and mudflap; profiling tools including gprof, oprofile, and perf_events; native and cross build environments; and graphical manycore application debugging and profiling tools.

Industry Support for Tilera

"We have adopted the TILE-Gx processors and within 6 months took our complete RouterOS software stack to production by leveraging Tilera's MDE. The development tools are easy and intuitive because they provide a standard Linux environment that all my engineers are already familiar with," said John Tully, chief executive officer, MikroTik. "Using the TILE-Gx processor family enabled us to utilize the same software and scale it across our family of routers from 1-2 Gig to the high end 36 core Cloud Core Router."

"The software and support we have seen from Tilera has been excellent," said, Matthew Knight, president, Accensus LLC. "Their tools are up there among the best on the market today. Tilera's MDE coupled with their TILE-Gx processor is the only platform we have found that provides real time performance in Linux comparable to that of a bare metal programming model. Using Tilera's Zero Overhead Linux features enables us to provide some of the most deterministic processor latencies in the industry for our High Frequency Trading customers, while deriving all the benefits of the Linux platform."

"The Tilera development environment provides a full Linux distribution with the latest open source libraries, packages and tools," said Victor Julien, lead developer, Open Information Security Foundation. "Running the Suricata software on the TILE-Gx36 processor was a simple task of recompiling. It was just like running on any other Linux machine, with the added benefit of having more cores and more performance density. I am impressed that Suricata achieves 30 Gbps in a 1U platform packing 144 cores from Tilera."

EasyCloud™ Makes Launching Web Apps In The Cloud Simple.


EasyCloud™ combines Infrastructure, Platform and Software as a Service into a single product, with the ability to easily launch popular open source applications, database and all, without any Linux programming expertise.

EasyCloud combines Infrastructure, Platform and Software as a Service into a single product that can easily launch popular open source applications without any Linux programming expertise, and it also features a new, promising auto-database creation tool and other API features.

It's the "first of its kind solution available on the market designed to help customers reduce the amount of time and effort spent on configuring popular web applications in cloud environments," according to EasyCloud™'s beta launch news release, published on Yahoo News on Tuesday. When using other cloud environments, designers and developers are faced with learning new Linux commands, searching for patches and scouring through help files just to build out servers before they ever get started on actual web development. Not only is this one of the biggest pains in the development business; it's also one of the most costly for any web developer trying to run a business.

"We ran a few EasyCloud deployments to see for ourselves and before we could even set a baseline for testing, our Wordpress server was ready to use. Needless to say we were impressed, so much so we had to ask ourselves why no one had thought of this, until now." says Brandon Starr, Founder of Starr Creative, a web firm in Houston, TX. EasyCloud is promised to save developers hours of time, freeing them to develop better websites.

"I figured if we could get web developers back to developing the web, rather than learning how to launch servers, the great minds of the web could get back to what they do best, creating. Instead of focusing on solving silly server issues. EasyCloud allows the developer to get his server online in the cloud, instead of getting frustrated trying to make sense of whats out there and how to make it work." says Michael Miller, CFO of EasyCloud.

EasyCloud was founded by the creators of new web hosting startup Clout Host, who plan on continuing the advancement of what they call IPSaaS technology (Infrastructure, Platform and Software as a Service). From the beginning they made it clear that they wanted to bring something new to the hosting business. The solution was discovered after several admins on the network team complained of wasted time on building and patching servers.

"The team went a bit stir crazy putting together something to finally resolve the issue. And after using our in-house solution for a few months, we knew this product was a must-have for any web designer or developer. Really this is a solution that finally brings the cloud to the masses." says EasyCloud™ CEO Jared Rice. EasyCloud.

Eucalyptus Moves to GitHub


Eucalyptus Systems has updated its IaaS (infrastructure as a service) cloud computing software so that computing jobs can be started more easily.

The open source Eucalyptus 3.1, to be released June 27, will also be the first version to be available on the GitHub online repository of open source software, providing a central place for developers to work on the code.

Developed by a University of California researcher, Eucalyptus is a cloud software platform that reproduces the Amazon Web Services API (application programming interface). With this software, organizations can duplicate AWS internally, allowing them to move jobs easily between Amazon and an in-house system. In March, Amazon announced that it would support development of Eucalyptus.
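Because Eucalyptus speaks the same EC2-style API as AWS, existing AWS client libraries can generally be pointed at a private Eucalyptus endpoint instead of Amazon. The hedged sketch below uses the Python boto library; the endpoint host, port, path and credentials are placeholders, not values from this article.

```python
# Hedged sketch: pointing boto's EC2 client at a private Eucalyptus endpoint.
# Host, port, path and credentials below are placeholders for illustration.
import boto
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="eucalyptus", endpoint="eucalyptus.example.internal")
conn = boto.connect_ec2(
    aws_access_key_id="YOUR-EUCA-ACCESS-KEY",
    aws_secret_access_key="YOUR-EUCA-SECRET-KEY",
    is_secure=False,                # many private clouds of the era used plain HTTP
    region=region,
    port=8773,                      # commonly used Eucalyptus cloud controller port
    path="/services/Eucalyptus",    # commonly used Eucalyptus API path
)

# The same calls work against AWS or Eucalyptus -- the point of the API parity.
for image in conn.get_all_images():
    print(image.id, image.name)
```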

This release builds on version 3.0, released last August. It's the first to come with FastStart, a provisioning service that allows an administrator to get a cloud computing job running within 20 minutes, the company claims. Virtual machines are kept in a library so they can be easily deployed. The software also includes a suite of installation tools, called SilverEye, for advanced installation and configuration of the virtual images. FastStart works on the CentOS 5 Linux distribution with the Xen hypervisor, or CentOS 6 with the Kernel-based Virtual Machine (KVM) hypervisor.

This version of the software has been updated to work with the latest versions of other platforms as well. It can now run on the latest version of Red Hat Enterprise Linux (RHEL), allowing users to deploy either Xen- or KVM-based virtual machines on RHEL servers. Users can run Elastic Compute Cloud (EC2), Elastic Block Storage (EBS), Simple Storage Service (S3) and Identity and Access Management (IAM) services on RHEL. Eucalyptus 3.1 has also been engineered to work with VMware's virtualization management software, vCenter version 5.

The move to GitHub should bring more community participation in the further development of the software, the company predicts. Eucalyptus Systems sells an enterprise version of the software with additional proprietary management features, though the company also shepherds the development of the open-source core of the program, called Eucalyptus Community Cloud. Placing the codebase on GitHub will centralize the development activity in one public location, as well as allow Eucalyptus users to file requests for new features and bug fixes, and watch their progress through the development cycle. Defects and new features will be tracked with a copy of the Jira project tracker.

Monday, June 18, 2012

VMware launches open source toolkit to run Hadoop on virtual machines


VMware is ramping up its big data push with Serengeti, a new open source toolkit that lets enterprises run Apache Hadoop on virtual machines.

In a statement Thursday, the virtualisation juggernaut said the toolkit will allow enterprises to deploy a Hadoop cluster in minutes on VMware’s vSphere virtualisation platform, plus common Hadoop components such as Apache Pig and Apache Hive.

VMware is also working with the Apache Hadoop community to contribute extensions that will make key components “virtualisation-aware” to support elastic scaling and improve Hadoop’s performance in virtual environments.

Apache Hadoop is an open source platform commonly used by large enterprises in the growing area of big data processing, where complex data sets are broken down into smaller chunks for analysis by clusters of computers to derive key business insights. It is based on MapReduce, a programming model conceived by Google to overcome the problem of creating web search indexes.

According to VMware, deployment and operational complexity, the need for dedicated hardware, and concerns about security and service level assurance have prevented many enterprises from taking advantage of Hadoop.

“By decoupling Apache Hadoop nodes from the underlying physical infrastructure, VMware can bring the benefits of cloud infrastructure – rapid deployment, high-availability, optimal resource utilization, elasticity, and secure multi-tenancy – to Hadoop,” it said.

Tony Baer, principal analyst at technology consultancy Ovum, said: “Hadoop must become friendly with the technologies and practices of enterprise IT if it is to become a first-class citizen within enterprise IT infrastructure. The resource-intensive nature of large Big Data clusters makes virtualisation an important piece that Hadoop must accommodate.”

“VMware’s involvement with the Apache Hadoop project and its new Serengeti Apache project are critical moves that could provide enterprises the flexibility that they will need when it comes to prototyping and deploying Hadoop,” Baer added.

Earlier this month, VMware partnered with Hortonworks to develop a high-availability architecture that allows companies to run Hortonworks’ Hadoop clusters on vSphere. In April, it also acquired big data start-up Cetas, which provides analytics applications on top of Hadoop.

Friday, June 15, 2012

Open Source PHP and Ruby on Rails Updated for Security


Busy week of patching continues as programming languages and frameworks get patched for security vulnerabilities.

This week has been a particularly busy one for IT professionals with Java and Microsoft updates. While those updates were mostly client side patches, server administrators aren't off the hook. The Ruby on Rails framework and PHP language both issued security updates this week addressing multiple vulnerabilities.

PHP 5.4.4 and PHP 5.3.14
PHP is a widely deployed open source language on web servers. According to a recent survey by W3Techs, PHP is used by 78 percent of known websites, including major Internet properties like Facebook, Wikipedia and Wordpress.com.

The two security flaws fixed in PHP 5.4.4 and PHP 5.3.14 are related to each other and could potentially enable an attacker to execute arbitrary code. The primary flaw, identified as CVE-2012-2143, is a security issue with the DES (Data Encryption Standard) implementation found within the PHP "crypt()" function.

A Red Hat Bugzilla report on the flaw by developer Jan Lieskovsky notes that it was found in the way the DES and extended-DES based crypt() password hashing function performed encryption of certain keys: some keys were truncated before being DES-digested, which could potentially have enabled an authentication bypass.
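To see why truncating a key before hashing matters, consider the hedged sketch below. It does not reproduce the PHP crypt() bug itself; it only illustrates the general failure mode, which is that when a hash silently truncates its input, distinct passwords collapse to the same digest and an attacker only has to match the surviving prefix.

```python
# Conceptual illustration only -- not the actual PHP crypt() flaw.
import hashlib

def truncating_hash(password, limit=8):
    # A deliberately flawed hash that only looks at the first `limit` bytes,
    # standing in for any implementation that silently truncates its key.
    return hashlib.sha256(password.encode()[:limit]).hexdigest()

stored = truncating_hash("correct horse battery staple")

# A wrong password that shares only the truncated prefix still matches.
attempt = "correct h"
print(truncating_hash(attempt) == stored)  # True -> authentication bypass
```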

The second flaw identified as CVE-2012-2386, is a vulnerability within the PHP phar extension. Phar enables entire PHP applications to be placed into a PHP Archive (phar) file.

"The vulnerability is caused due to an integer overflow error within the phar extension in the "phar_parse_tarfile()" function (ext/phar/tar.c) and can be exploited to cause a heap-based buffer overflow via a specially crafted TAR file," Security firm Secunia stated in its advisory.

Secunia warned that successful exploitation of the Phar vulnerability may allow execution of arbitrary code.

Ruby on Rails
Ruby on Rails (Rails) is a popular open source web framework that powers many popular sites, including GitHub. GitHub was exposed as being at risk in March due to a Rails vulnerability that has since been patched.

Rails 3.2.6 has now been released to patch a pair of new vulnerabilities that could leave users at risk. CVE-2012-2694 details an unsafe query generation risk in Ruby on Rails, while CVE-2012-2695 covers a Ruby on Rails SQL injection vulnerability.

"Input passed to the Active Record interface via nested query parameters is not properly sanitised before being used in SQL queries," Secunia wrote in its advisory. "This can be exploited to manipulate SQL queries by injecting arbitrary SQL code."

Naace calls for 'open source' approach to ICT


ICT association Naace believes the overhaul of ICT lessons in English schools should be treated as an opportunity to develop an "open source" approach to teaching the subject.

In comments reported by Computing.co.uk, the body's chair Miles Berry said a "wiki curriculum" could be adapted and enhanced by various participants as a collaborative project to ensure the best possible teaching framework.

He suggested the approach used by existing open source software companies should be applied to "see what we can learn and apply to the curriculum".

Mr Berry, who was speaking at the Westminster Education Forum, added: "People can bring whatever they want into it, into this big, wide-open space and choose from that what they want to use."

His comments come after the Department for Education confirmed that the current approach to teaching ICT will be scrapped from September this year to make way for a new curriculum, which is expected to place greater emphasis on computer science.

In response to the announcement, Naace claimed it is currently "an exciting time for ICT in schools".

Saturday, June 9, 2012

Single application blueprint self-service open hybrid cloud Nirvana


With its fingers in the OpenStack open source cloud platform pie and a personal interest in advancing its own (still OpenStack-compliant) cloud management platform, Red Hat this week announced the general availability of Red Hat CloudForms, calling it an "open hybrid cloud management platform" that acts as an Infrastructure-as-a-Service (IaaS) layer offering.

Originally slated to be a standard cloud IaaS, CloudForms' new "open hybrid" status aims to go some way toward addressing the "issue of the moment" that may be holding back some cloud computing deployments, i.e. who controls what?

The cloud challenge

As 451 Research analyst Rachel Chalmers put it, the IT department still needs to centralise "deployment, management and integration" when it comes to cloud-based compute resources.

"A platform like CloudForms makes it possible for organisations to build clouds that span their in-place infrastructure and expose it to a new generation of developers and end users, all without relinquishing control," said Chalmers.

Red Hat chief technology officer Brian Stevens has positioned CloudForms as a "comprehensive platform" in its own right, but one that is open to "encompassing" a wide variety of infrastructures, of many types, and from many vendors, but with no single vendor lock in.

"With CloudForms, enterprises can build and manage an open enterprise hybrid cloud, providing infrastructure choice spanning across multiple virtualisation platforms and extending to public cloud resources, and can build and manage applications in their cloud, enabling enterprises to use their cloud for their workloads," said the company, in a press statement.

Self-service, no waiting

Red Hat puts a lot of effort into talking up the so-called "self-service capabilities" of its cloud offering -- and this is a term you are going to hear more and more as companies try to describe how their cloud offerings satisfy IT administrators' needs for control and governance.

We'll also now most likely hear about the kind of wider interoperability that Red Hat is trying to fuel interest in. So look for the companies talking about a single application blueprint across multiple virtualisation technologies.

"We recognise that for a complete open hybrid cloud, enterprises need more than just the management layer -- they also need openness and portability across compute, data/services, programming models and applications," said Red Hat's Stevens.

GStreamer SDK for Multimedia App Development released by Fluendo and Collabora


Free pre-built version of GStreamer enables the building of complex multimedia applications across platforms and hardware

Collabora Ltd. and Fluendo S.A., the leading companies in open source multimedia, have released a free and open source cross-platform software development kit (SDK) for the GStreamer multimedia programming framework.

Designed for programmers developing multimedia applications such as video editors, streaming media broadcasters and media players, the new SDK works on Linux, Windows and Mac OS X, and is available for free at http://www.gstreamer.com . Several Fortune 500 OEMs have used GStreamer for multimedia development.

The GStreamer multimedia development framework allows an application to handle a wide range of media formats and sources, and has been developed and matured in the open source ecosystem for more than 10 years. The SDK broadens GStreamer availability, offering programmers an efficient way to develop full-featured multimedia applications across leading desktop platforms.

The SDK builds on the proven technology in GStreamer and provides a consistent, well tested and high quality multimedia sub-system for developers. Developers working with the SDK will find it to be functionally identical on Windows XP, Windows Vista, Windows 7, Mac OS X (version 10.5 or later on Intel) and all supported Linux platforms. This portability enables faster development and time to market.
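GStreamer applications build pipelines out of elements, and those pipelines can be driven from several languages. As a rough, hedged illustration of the framework the SDK packages, here is a minimal playback sketch using GStreamer's Python GObject-introspection bindings; it assumes a GStreamer 1.x installation with the Python bindings present, and the media URI is a placeholder.

```python
# Minimal GStreamer playback sketch via the Python GObject bindings (assumptions:
# python3-gi and GStreamer 1.x introspection data installed; placeholder URI).
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# playbin bundles demuxing, decoding, and output selection into one element.
pipeline = Gst.parse_launch("playbin uri=http://example.com/sample.ogg")
pipeline.set_state(Gst.State.PLAYING)

# Block until the stream ends or an error is posted on the pipeline's bus.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```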

GStreamer.com offers access to development resources

The release of the SDK coincides with the launch of www.gstreamer.com, a new initiative to facilitate the commercial adoption of the GStreamer project. Users have free access to the SDK, extensive documentation, tutorials and instructions for installing GStreamer and getting started.

Collabora and Fluendo will provide a range of products and services to augment the SDK. Fluendo provides a high quality suite of commercial audio and video codecs. Collabora brings a range of consultancy services. Other companies with offerings around GStreamer are invited to join this initiative.

"Fluendo is very enthusiastic about this SDK. We consider it to be a vital step toward broadening the use of GStreamer as the versatile and powerful multimedia framework it is. With this SDK, building multimedia-rich applications will be much simpler, giving developers the opportunity of spanning several platforms and operating systems in one single move", said Julien Moutte, co-founder and CTO of Fluendo. "Once again, open source solutions prove to be a solid foundation for the development of new technologies both free and commercial", he concluded.

"Collabora's work with GStreamer in open source and commercial product contexts for the past six years has shown it's unparalleled flexibility for building powerful multimedia applications and devices. We're very glad to work together with Fluendo to augment this adoption and deliver GStreamer to a wider audience", said Robert McQueen, co-founder and CTO of Collabora. "We look forward to continuing this initiative and joining forces with developers worldwide to bring innovative products to market."

GStreamer is widely used in server, desktop, mobile, automotive (IVI) and set-top box (STB) environments. An upcoming release of the SDK will include support for Google Android and other mobile platforms.

Wednesday, June 6, 2012

Python & PyLadies


It takes a lot of data to build the investigative and multimedia projects we deliver here at the Workshop. So naturally, we want to use the best possible methods to convey what we uncover.

That’s why we sent three summer staffers — Lydia Beyoud, Hilary Niles and Samantha Sunne — to a training session we helped coordinate last weekend: an introduction to the open-source programming language Python, through a training program geared specifically toward women.

The event, sponsored by DC PyLadies and DC Python, drew more than 25 aspiring programmers, with dozens more on the waiting list. The range of professional fields represented by attendees attests to the broad applications of code: journalism, social work, library science, archive management, gaming and public policy.

Knowledge is power, as the saying goes. And when today’s data sets and streams burst with millions of records that refresh in an instant, our job of turning that information into knowledge requires advanced new tools. This is why a language like Python is essential.

At the Workshop, our staff learned the basic vocabulary of Python and how it can be used to collect massive amounts of data from applications such as Twitter. With a few keystrokes, we were able to chart trending topics, the most recent tweets for a given user or keyword, and more. While some of these functions are easy to perform through Twitter itself, the ability to “scrape” the application allows journalists to analyze and report using massive amounts of raw data.
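As a rough illustration of the kind of script covered in the training, the hedged sketch below pulls recent tweets for a keyword through the unauthenticated Twitter search API of that era; the endpoint, parameters and response fields reflect 2012 and are assumptions here, since today's Twitter APIs require authentication and differ in structure.

```python
# Hedged sketch of keyword search against Twitter's 2012-era public search API.
# Endpoint, parameters, and response shape are assumptions reflecting that era.
import json
import urllib.parse
import urllib.request

def recent_tweets(keyword, count=15):
    params = urllib.parse.urlencode({"q": keyword, "rpp": count})
    url = "http://search.twitter.com/search.json?" + params
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # Each result carries the author's handle and the tweet text.
    return [(t["from_user"], t["text"]) for t in data.get("results", [])]

for user, text in recent_tweets("open source"):
    print("@%s: %s" % (user, text))
```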

In addition to reporting advantages like these, there’s a deeper significance to the nature of last weekend’s training: As the name PyLadies suggests, it has to do with gender.

The Investigative Reporting Workshop’s office this summer is about 70 percent women. In most newsrooms, women make up 37 to 40 percent of the staff, according to a recent study citing both Bureau of Labor Statistics data and an American Society of News Editors census.

In the tech world, the representation of women is only recently — and slowly — climbing back up toward its peak of 37 percent, reached in 1984. Part of what underlies this gender gap is a cultural divide. In a May 2012 article, Doug Gross of CNN described the environment at some tech companies as “something akin to your worst stereotype of a booze-soaked frat party.”

PyLadies, an international mentorship group, offers an antidote to such “brogrammer” (think “bro” + “programmer”) culture — a shift some women and, no doubt, some men would appreciate. We’re happy to have helped them find a venue at American University.

The organizers of the event, Jackie Kazil and Katie Cunningham, have plans to grow the DC group to reach a larger audience. With some luck, events like this will begin to grow organically nationwide.

Tuesday, June 5, 2012

Official CDISC Workshop hosted by Clinovo on June 11th, 2012 in Palo Alto


The CDISC Education Team Announces Clinovo Free CDISC Training on Legacy Clinical Data Conversion to SDTM

Sunnyvale, CA (PRWEB) June 05, 2012

The Clinical Data Interchange Standards Consortium (CDISC) Education Team announces that Clinovo is offering a Free CDISC workshop around legacy data conversion to SDTM. The training will take place on June 11th, 2012, in Palo Alto.

"We are delighted that the CDISC organization contacted us to lead this training", says Ale Gicqueau, President and CEO at Clinovo. "Clinovo has been actively promoting the benefits of CDISC standards at our free events, webinars, and through the free distribution of our open-source CDISC mapping tool. Being asked to host this workshop by the Clinical Data Interchange Standards Consortium (CDISC) Education Team is a great honor for us".

Clinovo will be hosting a vendor-neutral presentation to discuss common issues encountered during legacy clinical data conversion to CDISC SDTM standard, and methods for addressing these issues. The CDISC training will include demonstrations using commonly available tools.

CDISC Workshop Details:
Date: Monday, June 11, 2012
Time: 5:00-7:00 PM Pacific Time
Location: SNR Denton, 1530 Page Mill Road, Suite 200, Palo Alto, CA 94304-1125
Registration by e-mail to training(at)cdisc(dot)org.

Read Training Abstract

Clinovo has been active in the implementation of CDISC standards since 2007. As a CDISC Gold Member, Clinovo has continuously worked on raising awareness, education and expertise on CDISC standards throughout the industry.

Clinovo was recognized in April 2012 with the official status of CDISC Registered Solutions Provider for its valuable CDISC service offer to pharmaceutical companies. Clinovo is a registered official subject matter expert in the following CDISC standards: Clinical Data Acquisition Standards Harmonization (CDASH), Laboratory Data Model (LAB), Study Data Tabulation Model (SDTM), Analysis Data Model (ADaM), Define.xml, and Terminology.

Since April 2011, Clinovo has freely distributed CDISC Express, the only open-source SAS®-based system that automatically converts clinical data into CDISC SDTM; the tool has been downloaded by almost 600 unique users.

Internal C++ libraries launched by Facebook as Open Source


Facebook is liberating a large collection of libraries that it uses internally for C++ development. The code is available from a public GitHub repository where it is distributed as open source under the permissive Apache Software License.

The assortment of frameworks is collectively called Folly, the Facebook Open Source Library. Its individual components support a diverse spectrum of capabilities, ranging from general-purpose programming functionality to more specialized pieces that are designed to help developers wring extra performance out of complex applications.

Among many other things, the Folly libraries simplify concurrency, string formatting, JSON manipulation, benchmarking, and iterating over collections. They also offer optimized drop-in replacements for several C++ standard library classes, including std::string.

As I learned when I visited Facebook’s headquarters earlier this year, open source software is an important part of Facebook’s infrastructure and development culture. The company contributes to a number of major projects such as Hadoop and memcached. It has also released some key pieces of its internal software stack, such as the Cassandra database server and Thrift RPC framework.

When Facebook wants to open up a piece of software that it has developed internally, the company must first isolate the component so that it can be used by third parties without depending on other proprietary Facebook code. The challenge of disentangling pieces of infrastructure for standalone consumption is an obstacle that hinders Facebook’s efforts to open more of its stack.

Releasing the company’s internal C++ libraries will make it easier to share additional software that depends on this code. Although getting some critical Facebook dependencies out in the open is the primary motivation, the Folly code itself is also likely to be useful to a number of C++ developers.

Sunday, June 3, 2012

The R Programming Language


Take the open road to statistical analysis

Statistical analysis has been around since mainframes were introduced to academia and corporations back in the 1960s.

But the great diversity of telemetry collected by systems today, the need to sift through it for insight, and the growing popularity of open-source alternatives are transforming the R programming language for statistical analysis and visualisation. Its new nickname is the Red Hat for stats.

Everybody loves R, particularly those selling big-data products such as data warehouses and Hadoop data munchers.

Part of the reason is that R is an open source package that solicits input from a large and clever community of statisticians and quantitative analysts who are able to steer its development.

Alphabet soup
This was not the case for proprietary tools created by SAS Institute and SPSS at the dawn of the mainframe era, and their follow-ons in the distributed computing era.

Just as Linux can be thought of as an open-source analog of Unix, the R programming language can be thought of as an open-source analog of the S language, from which it borrows heavily.

S was created by John Chambers at Bell Labs in 1976 as a reaction to the pricey but well-respected SPSS and SAS tools that came out nearly a decade earlier.

S is very much a child of the VAX and Unix minicomputer era, while R is a product of the PC and Linux era.

The R language was created in 1996 by Ross Ihaka and Robert Gentleman, two stats professors from the University of Auckland in New Zealand who are still core members of the R development team. (Incidentally, so is Chambers, the creator of S, and it is no accident that some data crunching routines for S will run unchanged in the R environment.)

R can be thought of as a modern implementation of S. So can S-PLUS, created by a company called Insightful, which licensed S from Lucent Technologies in 2004 and was eaten by Tibco Software in 2008.

Come the revolution
Unlike S, and to a certain extent S-PLUS, R is not just some code created in an ivory tower.

It is the product of a community of statisticians and coders that has created more than 2,500 plug-ins for chewing on various data sets and doing statistical analysis tuned to particular data types or industries.

R is used by more than two million quantitative analysts worldwide, according to estimates made by Revolution Analytics, which was founded in 2007 to create a parallel implementation of R.

Since then, the company has taken an open-core approach to R, offering commercial support for the open-source package, while at the same time extending the R environment to run better on clusters of machines and in conjunction with Hadoop clusters.

To date, no one has commercialised the PSPP open-source alternative to SPSS (acquired by IBM in July 2009), but it would not be surprising to see this happen at some point, if PSPP matures.

Revolution Analytics has not exactly made the R community happy by peddling proprietary extensions to R in its R Enterprise distribution, after getting some seed money from Intel Capital in 2008 and $9m in venture money in 2009.

Since then, Revolution Analytics has parallelised the underlying R statistical engine so it runs better on multicore/multithreaded processors and across server clusters; added a NoSQL-like format called XDF to help parallelise data sets; and added support for native SAS file formats and conversion to XDF.

Most recently it has tweaked its R implementation so each node in a Hadoop cluster can run R analytics locally on data stored in the Hadoop Distributed File System and then aggregate the results of those calculations, much like MapReduce operations on unstructured data.
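That local-summary-then-aggregate pattern is essentially split-apply-combine. The toy Python sketch below illustrates the idea only; Revolution's actual implementation runs R inside Hadoop rather than anything like this, and the function and variable names here are invented for the example.

    # Toy split-apply-combine: each "node" (here, a worker process) computes
    # a partial summary of its data shard, and the partials are then merged.
    from multiprocessing import Pool

    def partial_stats(shard):
        # "Map" step: per-shard count and sum, computed locally.
        return len(shard), sum(shard)

    def combine(results):
        # "Reduce" step: merge per-shard summaries into a global mean.
        total_n = sum(n for n, _ in results)
        total_sum = sum(s for _, s in results)
        return total_sum / total_n

    if __name__ == "__main__":
        shards = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0, 7.0, 8.0, 9.0]]
        with Pool() as pool:
            print(combine(pool.map(partial_stats, shards)))  # prints 5.0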

Revolution Analytics has soaked up a lot of the oxygen in the R room for the past few years. But other companies are doing interesting things, integrating R tools with their own products and making life easier for analysts seeking answers in mountains of data.

Parallel universe
Seeking some kind of advantage over its rivals in the data warehousing space, Netezza opened up the Netezza software stack in February 2010.

Netezza is a maker of data warehousing appliances based on a heavily customised and parallelised version of the PostgreSQL database, which uses field programmable gate arrays (FPGAs) to boost its performance running on x86 clusters.

Netezza opened up its software development environment with a set of APIs that allow SAS and R algorithms to run in parallel on its warehouse appliances.

It similarly offered hooks for Java, C++, Fortran, or Python applications to reach into the data warehouse and use the FPGAs to extract stored data rather than going through the SQL query language.

Seven months later, as it became clearer that big data was going to be big business, IBM snapped up privately held Netezza for a cool $1.7bn.

In October 2010, data warehouse maker Teradata added its own in-database analytics to its eponymous data warehouses with a package called TeradataR.

This turns the Teradata Warehouse Miner tool into a plug-in for the R console, allowing 44 different analytical functions in Teradata databases, as well as any stored procedures in the data warehouse, to be exposed to R and called from R programs. Another 20 functions let R work in the Teradata environment.

The idea is to stay within the R console and run the analytics in parallel on the database, instead of trying to suck information down into a workstation and running R locally.
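The design idea, computing where the data lives and shipping back only summaries, can be illustrated with a few lines of Python against SQLite standing in for the warehouse. This is a conceptual sketch only, not TeradataR itself, and the table and column names are invented.

    # In-database aggregation versus pulling raw rows to the client.
    # SQLite stands in for the warehouse; TeradataR does the real thing
    # from the R console against Teradata.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("east", 100.0), ("east", 150.0), ("west", 90.0)])

    # In-database: the engine aggregates; only the summary comes back.
    summary = conn.execute(
        "SELECT region, AVG(amount) FROM sales GROUP BY region").fetchall()

    # Client-side: every raw row crosses the wire before being reduced.
    rows = conn.execute("SELECT region, amount FROM sales").fetchall()

    print(summary)                        # [('east', 125.0), ('west', 90.0)]
    print(len(rows), "raw rows fetched")  # 3 raw rows fetched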

Oracle joins in
Even Oracle is getting in on the R act. In February, the company launched Advanced Analytics, a bridge between Oracle databases and the R analytical engine.

Advanced Analytics builds on Oracle's Data Mining add-on for its 11g R2 database. When R programmers want to run a statistical routine, they call the equivalent SQL function in the Data Mining toolbox and run it against the database.

If there is no such SQL function, an embedded R engine spread across the database nodes (if it is a cluster) runs the R routines, collects summary data, and presents it back to the R console as the answer.

Oracle also ships something called the R Connector for Hadoop for its Big Data Appliance, a version of the Cloudera CDH3 Hadoop environment running on Oracle's Exa x86 cluster iron.

This connector lets an R console talk to the Hadoop Distributed File System and to NoSQL databases running on the Big Data Appliance.

Pros and Cons of Open Source Medical Design


Medical device design is heavily regulated, for obvious safety reasons. But a number of researchers, including some with support from the Food and Drug Administration, are developing “open-source” healthcare equipment. The idea is to offer completely transparent, shared software code and mix-and-match interface and hardware designs. While this might seem risky, the goal is to spark faster and more effective innovation in the medical device field, while making it easier to spot potential programming bugs and other device failures.

A study from the University of Patras in Greece found that one in three such devices sold in the U.S. between 1999 and 2005 was recalled. The FDA found that drug-infusion pumps were linked to 20,000 serious injuries and more than 700 fatalities between 2005 and 2009. It can be hard to expose specific problems with these products, given that medical software (and hardware) is proprietary and patent-protected, and thus veiled in secrecy. The open-source approach could, in theory, make it easier to fix, or even avoid, dangerous flaws before they hurt or kill hundreds or thousands of patients.

Some of the open-source devices currently being designed are so far intended only for use in medical research. They are usually sized for animals or meant for use on cadavers. In other words, the doctors, engineers, and designers working on such equipment aren’t yet addressing U.S. medical device regulation, but their research could lead to more effective products down the line.

The overview offers a helpful round-up that clearly illustrates a trend, and a thought-provoking one at that.

The Generic Infusion Pump project, which is a collaboration between the University of Pennsylvania and the FDA, is designing a drug-delivery system backwards: researchers started by figuring out potential failures, then will work to avoid or mitigate them in the design.

The Open Source Medical Device initiative at the University of Wisconsin-Madison is working toward a high-resolution medical body scanner combined with a radiotherapy machine. The initiative will offer all of the instructions and source code for building one, for free, along with recommendations on how much the parts should cost. Researchers say the device should cost one-fourth as much as a commercial body scanner, and it might be a good option for resource-constrained communities that otherwise might not have access to such equipment.

The Raven surgical robot is an open source system designed at the University of Washington in Seattle. Researchers use it to try out new processes in robotic surgery. (See Janet Fang’s SmartPlanet summary of what Raven can do.)

The Medical Device Plug-and-Play Interoperability Program, a $10 million initiative funded by the National Institutes of Health with FDA support, aims to establish open standards that allow devices from various companies to work together.

The Medical Device Coordination Framework being developed at Kansas State University aims to create an open-source hardware platform that would include interchangeable buttons and displays, along with software to connect them with sensors and other devices. Inventors could design health equipment from these mix-and-match components.

Friday, June 1, 2012

JavaScript founder dismisses Google Native Client

Questioning Google's Native Client development efforts, JavaScript founder Brendan Eich argued on Wednesday that JavaScript is sufficient for the needs Google is trying to fill.

Speaking at the O'Reilly Fluent Conference in San Francisco, Eich dismissed Google's technology and also promoted the upcoming upgrade to the official JavaScript specification, ECMAScript 6. With Native Client, Google is offering an open source technology to run portable native code securely in a browser.

But Eich doubted whether Native Client would get support from browser vendors Apple, Microsoft, or Mozilla, and he touted JavaScript as sufficient.

JavaScript is accessible and offers benefits like memory safety, said Eich, who is CTO at Mozilla. "Java can sandbox, too. We don't need Native Client," Eich said. He also cited the Low Level JavaScript project, which offers a C-like type system with manual memory management and memory safety, as negating the need for Native Client. Low Level JavaScript compiles to JavaScript.



ECMAScript 6, meanwhile, is intended to be better for applications, libraries, and code generators, according to Eich. "ECMAScript 6 is under way," and parts of it are already showing up in the Chrome browser and Mozilla's SpiderMonkey JavaScript engine, he said. "We don't want to change the language too much. I'm sensitive to people who think we're going to change it into Java or something. We're not doing that."

Specific improvements eyed for version 6 include string interpolation, the use of default parameter values instead of undefined values, indexing of objects via another object, and elimination of the arguments object. For libraries, better modularity is anticipated, along with proxies for meta-programming. Eich also touted code generator capabilities, saying, "I think we're finally ready for it." Developers of JavaScript, he said, also want to make it a better compiler target language.

Also under consideration for inclusion in JavaScript at some point is parallel JavaScript, for data and task parallelism; this is still a research project, Eich said, noting that JavaScript is now 17 years old. "The cool thing is people are using it in ways I couldn't foresee," he said.

A JavaScript developer in attendance lauded the planned "let" keyword due in the JavaScript standard. Let, said Steven Olson, software architect for the Church of Latter-day Saints, enables developers to declare variables scoped to a block rather than to the global namespace. "The benefit is you don't have the confusion between global and local namespace in your program."

Also at the conference, 4D announced Beta 2 of Wakanda, which is intended to be a turnkey JavaScript development platform featuring an IDE, client framework, NoSQL database, and server capabilities. A production release of Wakanda is due in June, with prices starting at $35 for a single developer using the vendor-supported commercial edition. A free community edition also will be available. "[Wakanda] allows the developer to really create applications for the Web and mobile applications really fast," without having to deal with integrating different software development components, said Michel Gerin, chief marketing officer at 4D. "It's all working together really nicely."