Thursday, December 27, 2012

Game Blocks offers free, open-source game creation for novices

Sheldon Pacotti, writer of the original Deus Ex games and indie developer in his own right, created Game Blocks, an open-source library for making games, for the students in his video game writing course at the University of Texas. Game Blocks is designed to help novice developers craft their stories, animations and physics effects with a simple, snap-to interface.


Game Blocks can compile platformers, adventure games, simulation games and arcade shooters for PC and Mac, and makes it easy to organize dialogue and story. Best of all, it's completely free. Anyone interested in messing around with game design or interactive storytelling can download Game Blocks directly from Pacotti's New Life Interactive.

Source: joystiq

SAP achieves Java Enterprise Edition 6 Web Profile Compatibility

SAP AG has achieved Java Platform, Enterprise Edition (Java EE) 6 Web Profile compatibility for SAP NetWeaver Cloud, a Java-based platform-as-a-service offered as part of the SAP HANA Cloud platform.



SAP NetWeaver Cloud enables customers to extend existing SAP systems with new cloud-based applications - developed or provided by customers, SAP partners or SAP.

Compatibility with Java EE 6 Web Profile will enable SAP customers and partners developing applications on SAP NetWeaver Cloud to measurably speed up their development and delivery time.

Java EE 6 is the first version of the Java Platform, Enterprise Edition, to define a focused Web Profile subset on which vendors can certify. This subset includes the major technologies of the full specification, is used for developing enterprise Web applications and simplifies the Enterprise Java programming model.

"We developed this technology together with the open source community in the Eclipse Virgo project," said Bjoern Goerke, executive vice president, Technology & Innovation Platform Core, SAP.

"This achievement is a result of SAP's ongoing engagement in open source communities and our commitment to open standards. Our strategy is to support and enable new technologies - first in the cloud - and then make them available to our on-premise customers."

Source: equities.com

Tuesday, December 11, 2012

Ubuntu Increases Reach with Language Translations


English may be the uncontested lingua franca of most development communities in our (post-?) Pax Americana age. But for developers who prefer working in other languages, the Ubuntu world has taken a big step toward making it easier to contribute without understanding English. That’s a particularly smart move for an open source project such as Ubuntu. Here’s why.



As Ubuntu developer Daniel Holbach discussed recently, translating documentation on Ubuntu development into languages other than English has long been a goal of the project. That vision has finally become reality with the release of the first non-English version of the Ubuntu Packaging Guide documentation, which explains how to make software contributions to Ubuntu.

For now, the only complete translation available is Spanish. But the story is bigger than that, because this sets a precedent for offering development documentation in many other languages via a new system that will make it easier to translate the guide, and keep translations up-to-date as information changes.
Beyond the Spanish version, progress has also been made toward translating the Packaging Guide into several other languages. Of these, the version nearest completion, interestingly enough, is Russian, with Brazilian Portuguese not far behind.

Translation and Open Source

In many senses, open source software has long been more friendly toward the non-anglophone world than its proprietary alternatives. Because the open source model makes it easy for anyone to translate applications into the language of his or her choice, users are not restricted to the language versions made available by developers themselves. It’s no surprise that Ubuntu supports many more languages than Windows.
And if open source products are appealing to non-English speaking users for this reason, they also theoretically enjoy a leg up with programmers who prefer to work in a different language. Ubuntu developers are thus doing the smart thing by acknowledging that not everyone who stands to make technical contributions to the operating system works primarily in English. Addressing this need helps to strengthen the Ubuntu community while also ensuring that as many programmers as possible are able to volunteer their expertise to advance Ubuntu development. In a community where volunteer labor is so important, removing linguistic barriers is crucial.

Of course, although I don’t have any statistics, I highly doubt there are legions of skilled developers out there who have previously not considered contributing to Ubuntu purely as a result of language issues. Most educated programmers can likely read and write English well enough to participate if they choose. After all, since most programming languages are filled with English words, it would be pretty difficult to become an excellent developer without learning some English along the way.

Still, the efforts that Holbach and his team have undertaken to assist developers whose first language is not English send a positive message about Ubuntu’s openness toward participants of all backgrounds. And they may well draw in some valuable contributions from programmers who would otherwise not go to the trouble of wading through English-only documentation.

Source: The Var Guy

Tuesday, November 20, 2012

3scale Launches Open Source API Proxy, Giving Enterprises API Traffic Management On-Premises and in the Cloud


3scale, a leading plug-and-play SaaS API management platform and services provider, has announced the launch of a new Open Source API Proxy that provides enterprises with API traffic management on-premises and in the cloud.



3scale’s Open Source API Proxy is built on NGINX, a popular open-source HTTP server and reverse proxy that currently powers a number of well-known sites including Eventbrite, Facebook, GitHub, Heroku, Pinterest, TechCrunch, and WordPress.com.



The Open Source API Proxy, when used in conjunction with the out-of-the-box API management solution 3scale provides, makes it possible for API providers to easily open and manage APIs without the need for programming skills, and getting started can take “less than 5 minutes.” Additional benefits are described in the press release as follows:

Easily (and securely) open and manage APIs.
Launch APIs with the fastest time-to-market.
Keep control of their API architecture.
Use proven technologies in the most demanding production environments.
The Open Source Proxy product site provides additional information about what is included with the 3scale/NGINX setup as well as detailed documentation.

The documentation explains how to set up the integration using “proxy mode.” In proxy mode, integration with 3scale’s management platform can be done without modifying the API source code or redeploying the API.

Source: Programmable Web

New Book Teaches Kids Open Source Programming


You know your programming language is a hit when it becomes the subject of a children’s book — or, at least, a book written for kids. Python, the popular open source programming language, can now claim that title, with the recent release by No Starch Press of Python for Kids: A Playful Introduction to Programming. Will the book assure your kid’s success as the next prodigy of the computer world?



There’s no shortage of books and other guides — such as the extensive documentation and tutorials on the Python website itself — about Python, which enjoys enormous popularity among programmers, especially in the open source world. As a flexible, extensible language that also encourages users through its very design to follow good programming practices, it deserves that attention.

Book titles intended to introduce children to programming, however, are rarer. There are a few examples out there, but by and large, the market for published programming guides has yet to converge with children’s literature.
In a sign of change, however, No Starch Press — whose products are distributed in the United States by O’Reilly, a huge name in technology and science publishing — recently introduced a guide to Python for kids written by Jason R. Briggs. This is the first children’s title from Briggs, a developer who lives in either England or New Zealand, depending on which source you consult.
According to the publisher, Python for Kids is tailored to a young audience, with examples that “feature ravenous monsters, secret agents and thieving ravens,” as well as “wacky, colorful art by Miran Lipovača.” Through this medium, the text communicates the fundamentals of working in Python, including dealing with data structures, using functions and modules, handling control structures and more.
To me, learning to program from a book feels ironically old-fashioned in the age of the Internet. It’s kind of like using a horse and buggy to tow your car. But for those readers who feel more at home with ink on pages than pixels, Briggs’s book should fit the bill. (For now, the title is available only in print, not digitally.)
What’s more, Briggs and No Starch seem to be latching on to something pretty new. Unlike IT publishing in general, programming literature for kids is a nearly untapped market with plenty of potential consumers. It could be a productive new frontier in IT education, especially in an era when every parent wants her kid to follow in the intellectual paths of people like Bill Gates and Richard Stallman. (Whether one should encourage emulation of the personal choices of such figures, of course, is a separate issue.)

Source: The Var Guy

Saturday, November 10, 2012

VMware releases micro version of Cloud Foundry PaaS


Everything in the cloud seems to be getting bigger or smaller. VMware today went the small route, releasing an updated micro version of the company's popular open source platform as a service (PaaS), Cloud Foundry.



In that respect, VMware's micro instance of Cloud Foundry seems like a natural move. VMware launched it in 2011, and today the company updated it. As a PaaS, Cloud Foundry is used by developers as a cloud-based tool for creating and deploying applications. Traditionally these PaaS deployments live on large cloud environments made up of multiple virtual machines. But a micro instance, like the one released by VMware today, gives developers another tool to more easily test and play around with Cloud Foundry on a single machine.
VMware says Micro Cloud Foundry has all the same features and functionality as the regular Cloud Foundry; the only limitation is the power of the single VM it runs on. In addition to announcing the micro version today, VMware also announced new features that will come with the Micro Cloud Foundry release. These include support for standalone apps and enhanced support for various programming languages, including Ruby, Java and Node.js.
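As a rough sketch of what a "standalone app" can mean here, below is a minimal Node.js HTTP server written in TypeScript. It is an illustration only, not taken from VMware's documentation; reading the assigned port from the environment is a common PaaS convention.

import * as http from "http";

// Fall back to port 3000 for local testing when the platform
// does not supply one via the environment.
const port = Number(process.env.PORT ?? 3000);

http
  .createServer((req, res) => {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("hello from a standalone app\n");
  })
  .listen(port, () => console.log("listening on " + port));

Compiled with tsc and run under Node, a single-file app like this can be exercised against a micro instance before being pushed to a full deployment.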

Source: ComputerWorld

Microsoft Open Technologies Open-Sources Reactive Extensions


Microsoft Open Technologies announced that it is open-sourcing Reactive Extensions, an asynchronous programming model for the cloud.



Microsoft has open-sourced an asynchronous programming model known as Reactive Extensions or Rx.

According to a post on the company’s Interoperability@Microsoft blog, Microsoft Open Technologies (MS Open Tech) is open-sourcing Rx, a programming model that enables developers to glue together asynchronous data streams.

Microsoft said the model is particularly useful in cloud programming because it creates a common interface for writing applications that draw on diverse data sources, such as stock quotes, tweets, computer events and Web service requests, according to the post, written jointly by Microsoft software architect Erik Meijer and Claudio Caldato, principal program manager for MS Open Tech.
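As a rough sketch of the idea (using the modern RxJS API, which postdates this 2012 announcement; the event and stream below are invented for illustration), asynchronous sources become first-class collections that can be queried:

import { fromEvent, interval } from "rxjs";
import { map, filter, throttleTime } from "rxjs/operators";

// Compose a stream of UI events with query-style operators
// instead of wiring up nested callbacks by hand.
fromEvent<MouseEvent>(document, "click")
  .pipe(
    throttleTime(500),                          // drop bursts of clicks
    map(e => ({ x: e.clientX, y: e.clientY })), // project each event
    filter(p => p.x > 100)                      // keep clicks right of x = 100
  )
  .subscribe(p => console.log("click at", p));

// The same operators work over time-based sources, which is what
// makes heterogeneous asynchronous streams composable.
interval(1000)
  .pipe(map(n => n * 2))
  .subscribe(n => console.log("tick", n));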

Meijer, a proven researcher and software wizard with several Microsoft inventions under his belt, developed Rx and continues his leadership role in the evolution of the technology. The Rx development team will be on assignment with the MS Open Tech Hub, an engineering program to accelerate the open development of the project and collaborate with open-source communities.

The Rx source code will be hosted on CodePlex, Microsoft’s open-source project-hosting site, and will be open to community contribution; the aim is to grow the community of developers who want a consistent programming interface that works across several development languages. The goal of open-sourcing Rx is to expand the number of frameworks and applications that use it, in order to achieve better interoperability across devices and the cloud.

“There are applications that you probably touch every day that are using Rx under the hood,” Caldato said in the post. “A great example is GitHub for Windows.”

"GitHub for Windows uses the Reactive Extensions for almost everything it does, including network requests, UI events, managing child processes (git.exe),” said Paul Betts a .NET developer at GitHub is quoted as saying in the Microsoft post. “Using Rx and ReactiveUI, we've written a fast, nearly 100 percent asynchronous, responsive application, while still having 100 percent deterministic, reliable unit tests. The desktop developers at GitHub loved Rx so much, that the Mac team created their own version of Rx and ReactiveUI, called ReactiveCocoa, and are now using it on the Mac to obtain similar benefits."

Scott Weinstein, a principal and practice head at Lab49, said in the post: “Rx has proved to be a key technology in many of our projects. Providing a universal data access interface makes it possible to use the same LINQ compositional transforms over all data, whether it’s UI-based mouse movements, historical trade data, or streaming market data sent over a Web socket. And time-based LINQ operators, with an abstracted notion of time make it quite easy to code and unit-test complex logic.”

And Netflix Senior Software Developer Jafar Husain added: "Rx dramatically simplified our startup flow and introduced new opportunities for performance improvements. We were so impressed by its versatility and quality that we used it as the basis for our new data access platform. Today, we're using both the JavaScript and .NET versions of Rx in our clients, and the technology is required learning for new members of the team."

The Rx offering on CodePlex includes a series of libraries, such as:

Rx.NET: The Reactive Extensions (Rx) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators.
RxJS: The Reactive Extensions for JavaScript (RxJS) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators in JavaScript which can target both the browser and Node.js.
Rx++: The Reactive Extensions for Native (RxC) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators in both C and C++.

Source: eWeek

Friday, November 2, 2012

VMware expands Redis open source in-memory data store programming options


The Redis in-memory data store update adds support for scripting and bit-wise operations



Redis, an open source in-memory data store maintained by VMware, has been upgraded to be more stable and to make more judicious use of memory, two traits that should make it more appealing for enterprise deployments.

"Redis 2.6 is more mature than Redis 2.4 in many ways, and users will have a better overall experience," said Salvatore Sanfilippo in an email interview. Sanfilippo is a VMware open-source developer who authored Redis.

"We already see that Cloud Foundry users love Redis for its simplicity of use. We anticipate this will only increase with Redis 2.6," Sanfilippo wrote, referring to how VMware offers the data store as part of its Cloud Foundry PaaS (platform as a service) offering.

One of a growing number of NoSQL databases, Redis is an advanced key-value store, one whose values can take a wide range of formats, including strings, hashes, lists and others. Because of this trait, Redis allows complex operations to be executed on the server, minimising the workload on less-efficient clients.

"Redis is particularly suited for tasks where there is a very high load in general, and especially for very write-heavy workloads, where the data set size is in a range suitable to be stored in-memory," Sanfilippo wrote. "Because the Redis data model is different and exposes an API to manipulated fundamental data structures, there are problems that are simpler to model with Redis."

One job that Redis is particularly well suited for is real-time analysis of data, Sanfilippo said. The Redis data store, which is usually run entirely in memory, can easily work in conjunction with another on-disk data store that would hold a much larger collection of data.
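As a hedged illustration of these patterns (the node-redis client and all key names below are my own choices, not from the article), a few commands cover counters for real-time analytics, lists as a lightweight queue, and hashes as records:

import { createClient } from "redis";

async function main() {
  const client = createClient(); // defaults to localhost:6379
  await client.connect();

  // Real-time analytics: INCR is atomic on the server, so many
  // clients can bump the same counter without a read-modify-write race.
  await client.incr("pageviews:2012-11-02");

  // Lists let Redis double as a lightweight message queue.
  await client.lPush("events", JSON.stringify({ user: 42, action: "login" }));

  // Hashes store structured records under a single key.
  await client.hSet("user:42", { name: "Ada", visits: "17" });

  console.log(await client.get("pageviews:2012-11-02"));
  await client.quit();
}

main().catch(console.error);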

"Just as PostgreSQL was the basis for so many leading analytic relational DBMS', Redis is being adapted for a variety of NoSQL-style products," said database industry analyst Curt Monash. In addition to real-time analysis, Redis is also frequently used as a caching layer, like "memcache on steroids," Sanfilippo said, and even as a messaging system. "Both types [of applications] depend on writing data quickly into simple data structures," Monash added.

The new features with the Redis 2.6 release offer a wider range of capabilities to help in these duties. For this release, significant parts of the Redis core engine were rewritten.

Google Code-in Contest for High School Students Starts this November


Google announced that its Google Code-In contest for 13- to 17-year-old students will begin Nov. 26.



Google has announced that its third annual Google Code-In contest for teenagers will kick off Nov. 26.

Google Code-In is an international contest to introduce 13- to 17-year-old pre-university students to open-source software development. Prizes include certificates and T-shirts, and 20 grand-prize winners will win an all-expenses-paid trip to Google headquarters in Mountain View, Calif., next spring for themselves and a parent or legal guardian. Last year, 542 students from 56 countries participated.

According to Google, the goal of the contest is to give students the opportunity to explore the many types of projects and tasks involved in open-source software development. Globally, open-source software development is becoming a major factor in all industries, from governments, health care, and relief efforts to gaming and large tech companies. The IT industry is always looking for developers, and programming jobs typically rank at the top of the most in-demand positions according to job reports posted by eWEEK and other outlets. As such, programs like those at Google help to cultivate developers of the future.

From late November to mid-January, students will be able to work with 10 open-source projects on a variety of tasks. These projects have all successfully served as mentoring organizations working with university students in the Google Summer of Code program.

Google’s highly touted Google Summer of Code is an annual program, first held from May to August 2005, in which Google awards stipends ($5,000 as of 2012) to hundreds of students who successfully complete a requested free and open-source software coding project during the summer. The program is open to students aged 18 or over. A similar program, the Google Highly Open Participation contest, ran in 2007; in 2010 Google changed the format slightly, and the Google Code-In program was born. Now in its third year, the Google Code-In contest continues to reach students from around the globe: 904 students from 65 countries completed tasks in the 2010 and 2011 editions of the program.

Meanwhile, for the Google Code-In, the types of tasks students will be working on will fall into the following categories:

Code: Tasks related to writing or re-factoring code;
Documentation/Training: Tasks related to creating/editing documents and helping others learn more;
Outreach/research: Tasks related to community management, outreach/marketing, or studying problems and recommending solutions;
Quality Assurance: Tasks related to testing and ensuring code is of high quality; and
User Interface: Tasks related to user experience research or user interface design and interaction.
“Over the last two years we have had 904 students compete in the contest from 65 countries,” said Stephanie Taylor, a program manager in the open-source team at Google, in a blog post. “This past January we announced the 10 Grand Prize Winners for the 2011 Google Code-In. In June, we flew the winners and a parent/legal guardian to Google's Mountain View, Calif., headquarters for a five-day/four-night trip, complete with an awards ceremony, talks with Google engineers, Google campus tour and a full day of fun in San Francisco.”

Taylor called on teachers to get involved. “If you are a teacher that would like to encourage your students to participate, please send an email to our team at ospoteam@gmail.com,” she said. “We would be happy to answer any questions you may have.”

Wednesday, October 17, 2012

The Git Revolution Is Here


The movement from centralized to distributed VCS is accelerating. Enterprises and tool vendors are catching on and catching up with what open-source developers have been doing for a while. And at the front of the parade is Git.



Version control used to occupy a sleepy corner of programming technology. After Subversion replaced CVS, it became the de facto VCS for open-source projects and small IT organizations. Large enterprises continued doing what they always did, which was to rely on larger packages such as Rational ClearCase and Perforce to handle their massive code bases and to enforce important constraints such as access rights, partial codebase visibility, and so forth.

The world stayed in this happy steady state until the 2004-2005 timeframe, when one of the preferred VCS vendors of the open-source community, BitKeeper, decided to withdraw its free offering. This event prompted two developers on the Linux project to create competing distributed VCS (DVCS) systems: Git and Mercurial. Both products offered similar functionality. The notable enhancement they both brought was blurring the concept of a central master copy and making it easy to create forks and branches and to merge them. In sum, both products were ideal for open-source projects. Pretty soon, communities sprang up around each one: Bitbucket became the premier host for Mercurial, GitHub the leader in the Git market.

Each DVCS had marquee projects: Git had Linux, Mercurial had Mozilla, most of Sun’s OSS, and more recently, Google’s Go. The general perception was that, while they offered similar functionality, Git was not Windows-friendly, whereas Mercurial was.

Git eventually moved squarely into first place, as the result of the great popularity of GitHub, the project-hosting site that offered many free services attractive to OSS projects. It built a strong community, which Bitbucket — its Mercurial counterpart — could not match. (The GitHub-Bitbucket competition is back in play as a result of Bitbucket now supporting Git in addition to Mercurial. Bitbucket also provides one attractive feature missing from GitHub: unlimited free private repositories.)

Even Mercurial advocates admit that Git is now the unquestioned DVCS champion. But most of them see no need to migrate from Mercurial. However, migration from pre-DVCS systems is indeed the order of the day. Our lead feature this week describes how Atlassian moved the codebase for its JIRA product from Subversion to Git. The net effect of the move was that Atlassian is able to release more often because its workflows were improved in ways that would have been very difficult to do with Subversion. The details of the specific benefits and the challenges of the migration are all spelled out in the article.

For large enterprises, Git has some limitations that need attention. And indeed traditional tool vendors are starting to provide the missing pieces. For example, Git repositories become unwieldy once they cross 1GB in size. GitHub strongly encourages customers to break up their repositories into sub-repositories to keep each one under the 1GB threshold. However, for large codebases, 1GB is a small number. To address this, Perforce has just released a Git front-end to their well-known VCS, which is distributed and routinely handles codebases in the 10s and 100s of GBs. (By the way, Perforce offers their VCS with the Git front-end free to teams of up to 20 developers.)

Managing Git is another issue for which tools are beginning to appear. For example, Atlassian just released Stash, which helps sites run Git behind the firewall. It provides management tools as well as features such as fine-grained access control. (Forking a project is fine in OSS, not so fine within the enterprise!) The latter is particularly important for outsourced work, where you want developers to access only the portion of the code they need to see for their work.

A final issue being tackled by vendors is that the DVCS model muddles the concept of the golden master, which is a primary architectural feature of the centralized VCS model. Many organizations are not comfortable with the uncertainty around this point, and so tool vendors are providing a cross-over kind of model where the concept of golden master is maintained, but the remaining benefits of the DVCS model are installed and enjoyed.

In much the same way Subversion cleared out all the old CVS-based projects, Git will do the same to Subversion. How Mercurial fares in that scenario is hard to tell. Either way, though, the DVCS revolution is here to stay.

Lavastorm Analytics Joins the Power of R Programming with its Visual Analytic Modeling Products


New R Analytics Pack Allows Business Analysts and R Programmers to Rapidly Combine Statistical and Predictive Analytic Models with Broader Business Intelligence Programs



BOSTON — Lavastorm Analytics, a leading global analytics software company, announced today that the award-winning Lavastorm Analytics Platform and its core Lavastorm Analytics Engine can now directly support statistical and predictive data analytics models written using the open source R programming language. The support comes via the new R Analytics Pack in the Lavastorm Analytics Library that allows enterprises to directly integrate analytics written in R into broader business intelligence (BI) and business data analytics applications using Lavastorm’s drag-and-drop visual discovery interface.

As enterprises increasingly add functions like linear regressions, clustering, and financial modeling to their ongoing analytics programs to become more agile, data-driven businesses, the use of R language-based analytics is rapidly rising. The new R Analytics Pack brings the full capabilities of the Lavastorm Analytics Engine to R programmers, including the ability for them to bring together disparate data, and profile, inspect, and cleanse the data themselves. In addition, the R Analytics Pack extends the Lavastorm Analytics Engine’s core functionality and enables business analysts to perform more statistical, financial, and predictive analyses in tandem with traditional business process analytics to optimize key functions like competitive pricing and customer churn avoidance.

“R-based statistical analyses have emerged as a highly popular language for analyzing cross-functional business processes and a critical tool for complex, Big Data analytics,” said Carl Lehmann, Research Manager, Enterprise Architecture, Integration and Business Process Management at 451 Research. “The trend now is to make investments in R-based analytics more accessible to business analysts, not just the programmers that created them, so that the analytics can be more broadly used across the business to analyze and control various operations, and Lavastorm’s unique visual interface will help this greatly.”

In addition to broadening enterprises’ BI abilities, using the R Pack within the Lavastorm Analytics Engine allows users to quickly and easily turn complex R-based programming code into reusable analytic building-blocks. Lavastorm users can store their R analytics in pre-configured, drag-and-drop functions that can then be deployed with a single click of the mouse to perform customized and persistent analysis of specific business processes.

“R’s emergence has been a natural extension of the explosive rise in use of analytics to optimize business performance,” said Drew Rockwell, CEO, Lavastorm Analytics. “It provides extremely powerful computation functionality, and is designed to be extensible through community-driven libraries like our own Lavastorm Analytics Library. But the ability to connect R’s benefits to broader BI practices has left something to be desired. Our new R Analytics Pack not only extends the Lavastorm platform to encompass completely new types of analyses through R, but also makes R’s unique abilities truly accessible to business analysts. This will be invaluable for enterprises grappling with Big Data challenges.”

The release of the R Analytics Pack follows Lavastorm’s launch of the Enhanced Analytics Pack earlier this year, and furthers the company’s modular approach to introducing new analytic and data integration functions to the Lavastorm Analytics Library. The new R Analytics Pack, replete with training video, is available immediately for use via the Lavastorm Analytics Engine, which has proven capable of performing analytics of complex business data and processes up to 90 percent faster than SQL-based tools.

Wednesday, October 3, 2012

TypeScript: New Programming Language by Microsoft

TypeScript is an application-scale release of a cross-platform JavaScript language, available to download now



Microsoft released a new open source, standards-based, general-purpose programming language that extends the capabilities of JavaScript. TypeScript has been unveiled by Soma Somasegar, corporate vice president of the developer division at Microsoft, and described as a means of writing cross-platform JavaScript to run in any browser on any device running standard JavaScript.

As an "application scale" offering, Microsoft says that the release of TypeScript comes in direct response to the challenge of creating larger-scale JavaScript applications.

NOTE: In search of a definition for what constitutes a "large-scale" JavaScript application, Google developer programs engineer Addy Osmani writes, "In my view, large-scale JavaScript apps are non-trivial applications requiring significant developer effort to maintain, where most heavy lifting of data manipulation and display falls to the browser."

In terms of form and function, TypeScript uses the same JavaScript syntax and semantics. It compiles to clean, standard, broadly compatible JavaScript. TypeScript starts and ends as JavaScript, adding only what is needed for large-scale web client applications to the JavaScript standard, so developers will be able to reuse existing JavaScript.
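A minimal sketch of the idea (the Point interface and distance function are invented for illustration): the annotations below exist only at compile time, and the emitted JavaScript is the same code with them erased.

// The interface and type annotations are compile-time only;
// tsc checks them, then strips them, emitting plain JavaScript.
interface Point {
  x: number;
  y: number;
}

function distance(a: Point, b: Point): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

distance({ x: 0, y: 0 }, { x: 3, y: 4 }); // 5

// distance({ x: 0, y: 0 }, "oops");
// ...would be rejected by the compiler before the code ever runs.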

Microsoft is also releasing a TypeScript for Visual Studio 2012 plugin that the firm hopes will improve developers' experiences with the tooling and further help address the challenge of building large-scale JavaScript applications. The TypeScript plugin provides developers with code navigation, refactoring, static error messages, and IntelliSense.


Somasegar says that JavaScript was originally designed to be a client-side scripting language for web pages: "For many years it was limited to event handlers that scripted a Document Object Model (DOM). As a result, JavaScript is missing many of the features necessary to be able to productively write and maintain large-scale applications, namely those that create distinct contracts between components and developers."

"TypeScript is a superset of JavaScript that combines type checking and static analysis, explicit interfaces, and best practices into a single language and compiler. By building on JavaScript, TypeScript keeps you close to the runtime you're targeting while adding only the syntactic sugar necessary to support large applications and large teams. Importantly, TypeScript enables great tooling experiences for JavaScript development, like those we've built for .NET and C++ and continue to innovate on with projects like "Roslyn". This is true whether you're writing client-side JavaScript to run on Windows, Internet Explorer, and other browsers and operating systems, or whether you're writing server-side JavaScript to run on Windows Azure and other servers and clouds," said Somasegar.

As stated, TypeScript is open source and the compiler is available under the Apache 2.0 license on CodePlex, Microsoft's free open source project-hosting site.

An Open Source Dyslexic Font


Programmers seem to be prone to dyslexia, or is it that dyslexics are prone to programming? Whatever the cause, an open source dyslexic font is welcome news.




Now we have an open source font that claims to be dyslexic-friendly. It is a modification of the Bitstream Vera font that gives each letter a "heavy bottom". The font was made by Abelardo Gonzalez, a New Hampshire-based app designer, who released his designs onto the web at the end of last year. Since then the Creative Commons-licensed font has been downloaded more than 12,000 times - not a lot, but it has also been built into a number of applications: WordSmith, the openWeb browser, various e-readers and now Instapaper.
The idea of the "heavy bottoms" is that it makes each letter less symmetrical in and adds "gravity" so that you can't rotate the letters. The letter shapes are also supposed to stop flipping e.g. p to q and b to d for example.

The only problem with this really great idea - and it is difficult to criticize any noble cause - is that it hasn't been tested. There are some personal commendations from dyslexics on the site, and there is a plan for a specialist school to test the font in the future, but at the moment you have to look at it and decide for yourself if it works.

The heavy bottoms remind me of a failing old mechanical typewriter, where the tops of the letters fade out because of a broken mechanism or a drying ribbon. Overall, it was a relief to get back to a standard font, and I certainly wouldn't want to use the dyslexic font for coding.
But this is a personal opinion and dyslexia is as varied a problem as you can find. The best thing is to try it out.

Tuesday, September 18, 2012

Digia finally acquires Qt


Digia today announced that it has completed the acquisition of the Qt software technologies and Qt business it announced in August 2012. As Digia now becomes responsible for Qt activities including product development and commercial and open source licensing, the acquisition paves the way for Qt to become the world’s leading cross-platform application development framework across desktop, embedded and mobile platforms.



Used by over 450,000 developers worldwide, Qt is a full framework that enables the development of powerful, interactive and platform-independent applications. Qt is a proven and solid technology used across many different industries to deliver applications beyond what is possible with many other cross-platform technologies. Qt applications run natively on the host system, delivering performance that is far superior to any other cross-platform application development framework. With Qt, developers can not only take full advantage of the multicore processors in desktops and high-end smartphones, but also deliver impressive graphical performance on low-end and low-cost embedded hardware. Qt’s support for multiple platforms and operating systems allows developers to save significant time otherwise spent porting to other devices. Once developed on a Windows desktop PC, for example, an application’s code can be re-used on a smartphone, tablet computer or embedded device.



Qt comes with a built-in tool-chain, IDE and an extensive set of C++ frameworks. The Qt Quick technology allows for rapid and easy creation of dynamic and interactive user interfaces. Using Qt Quick’s declarative programming method, which couples QML and JavaScript with a powerful C++ engine, leads to optimally performing and visually stunning user interfaces. Qt's built-in features such as full support for HTML5 through the Qt WebKit module provide an easier and all-in-one solution for hybrid application development where pure web technologies fall short.

Qt already runs on leading desktop, embedded and real-time operating systems, as well as on a number of mobile platforms. Digia has initiated projects to deliver full support for Android and iOS mobile and tablet operating systems within Qt and will present the product roadmap and strategy later in the fall. Adding this support will extend one of the most popular and powerful development frameworks to cover the fastest growing and most popular smartphone and tablet operating systems.

Digia will continue to work with ecosystem members and the Qt community to secure a successful release of Qt 5, and is committed to continuing the Qt Project in order to maintain Qt’s availability under both open source and commercial licences. Digia now has over 200 dedicated people working on Qt. The 89 Qt team members who have now joined Digia include many key players of the Qt community, among them Lars Knoll, Chief Maintainer of the Qt Project and one of the most experienced members of the Qt ecosystem.

Lars Knoll, Chief Maintainer of Qt Project, Digia commented: "I look forward to the exciting next chapter for Qt together with Digia. Qt 5 Beta was released during the acquisition process and we are working hard to reach a final release within the next months. Qt 5 is a huge step forward for Qt and, with technologies such as Qt Quick or the improved WebKit module, Qt is making many things trivial that are very hard to achieve with other technologies. Qt being developed as an open source project, with a vibrant and strong community behind it, greatly increases the strength of the technology. I am now very much looking forward to bringing the product to new platforms such as Android and iOS."

Tommi Laitinen, SVP, International Products, Digia, commented: "Qt has already proved to be exceptionally popular with developers because of its easy-to-use libraries and ability to generate efficient, high performance applications in minimal time. Our intention is to continue to work towards the goal set out by Trolltech, Qt’s originators, to develop a framework that enables users to ‘write code once and deploy everywhere’. Adding support for Android and iOS, together with the most widely used embedded operating systems, will give us the potential to target hundreds of millions of products and to help make Qt the dominant force in the software development world."

9 key career issues for the software developer


The path from birth to death is filled with choices about where to work and what kind of work to do. Sometimes the world is nice enough to allow us some input. These days, developers have a lot more say in their employment, thanks to rising demand for their services.



Whether you’re an independent contractor or a cubicle loyalist with a wandering eye, programming want ads abound, each stirring its own set of questions about how best to steer your career. For some, this is entirely new territory; they fell into employment with computers simply as a means to scratch an itch.
The following nine concerns are central to charting your career path. Some target the résumé. Others offer opportunities for career growth in themselves. Then there are questions of how to navigate unfortunate employment issues particular to IT. Thinking about your answers to these questions is more than a way of preparing for when someone comes to ask them. It is the first step in making the most of your interests and skills.

Will certification give you an edge? A common dilemma facing career-minded developers is how much attention to pay to certification. After all, employers always want to determine if you really know what you claim to know, and tech companies are always stepping forward with certificate programmes to help them.
The programmes are aimed at teaching a given technology, then testing your competence with what was taught. They focus on practical solutions, not theoretical conundrums like most university courses. Thus, they appeal to companies looking to vet candidates in their ability to deliver real-world solutions.
The key question for developers, though, is whether there’s any real demand for a particular certificate. Most cutting-edge technologies are too new to be testable, so employers look for other evidence of ability. The real market for certification will always be in bedrock tools like running an Oracle database or maintaining a fleet of Microsoft boxes. Companies that depend on Oracle or Microsoft will usually pay extra for people who’ve already demonstrated a skill. When your certification and the employer’s needs align, everyone is happy.

But developers need to choose carefully. Preparing for exams takes a fair amount of time, and the questions often test trivial knowledge — the kind usually provided by automatic tools built into today’s powerful editors.

Certificates often have a limited window of usefulness as well. Being an expert on Windows XP was great 10 years ago, but it won’t help much today — unless the company is sticking with XP until the bitter end. You can often find yourself getting certified in version 1.0, then 1.1, then 1.2 of a product.

What is the true value of a computer science degree? If it’s hard to discern whether a professional certificate for a particular technology is worth earning, it’s almost impossible to decide whether to invest in traditional collegiate degrees. All it takes is one look at leaders like Steve Jobs, Michael Dell, Bill Gates, or Mark Zuckerberg to know that a bachelor’s degree is not a prerequisite for changing the world.

But traditions die hard. Some companies simply insist on a bachelor’s or even a master’s degree because it’s an easy way to cut their pile of résumés, or because it offers a measure of some intangible quality like a deep interest and versatility in working with computers. Whatever the reason, a significant number of people continue to believe that a sheepskin is essential, so developers with an eye on the want ads encounter the dilemma of whether to stock up on diplomas time and again.

The practical value of a collegiate degree is controversial. Some find the typical university curriculum too focused on theoretical questions about algorithms to be a meaningful benchmark in the workplace. The professors are more interested in whether an algorithm's running time can be predicted with a polynomial or an exponential function.

Others believe that this abstract understanding of algorithms and data structures is essential for doing a good job with new challenges. Languages come and go, but a deep understanding lasts until we retire.

Should you specialise or go broad when it comes to programming languages? A good developer can program in any language because the languages are all just if-then-else statements wrapped together with clever features for reusability. But every developer ends up having a favourite language with a set of idioms and common constructs that are burned into the brain.

The challenge is to choose the best one for the marketplace. The most demand will be for languages that form the foundation of the big stacks. Java, C++, PHP, and JavaScript are always good choices.

But the newer languages are often seductive. Not only do they solve the problems that have been driving us nuts about older languages, but no one has yet discovered the new aggravations they bring.

Employers are often as torn as developers when it comes to committing to a new language. On one hand, they love the promise that a new programming language will sweep away old problems; on the other, they’re prudent enough to be sceptical of fads. A technology commitment could span decades, and they must choose wisely to avoid being shackled with a onetime flashy language that no one knows any more.

For developers, the best position is often to obtain expertise in a language with exploding demand. Before the iPhone came out, Objective-C was a fading language used to write native applications for the Mac. Then things changed and demand for Objective-C soared. The gamble for every developer is whether the new FooBar language is going to fade away or explode.

Should you contribute to open source projects? The classic stereotype of open source projects is that they’re put together by unkempt purists who turn up their nose at anything to do with money. This stereotype is quickly fading as people are learning that experience with major open source projects can be a valuable calling card and even a career unto itself.

The most obvious advantage to working on an open source project is that you can share your code with a potential employer. If you’ve achieved committer status, it shows you work well enough with others and know how to contribute to an ongoing project. Those are valuable skills that many programmers never develop.

Some of the most popular open source projects are now part of enterprise stacks, so companies are increasingly looking for developers who are part of the community built around the open source projects on which their stack depends. One manager at a major server company told me he couldn’t afford to hire Linus Torvalds, but he needed Linux expertise. He watched the Linux project and hired people who knew Linus Torvalds. If the email lists showed an interaction between Torvalds and the developer, the manager picked up the phone.

Many open source projects require support, and providing this can be a side job that leads to a full-time career. Companies often find it much cheaper to adopt an open source technology and hire a few support consultants to make it all work, rather than go proprietary.

Savvy programmers are also investing early in open source projects by contributing code. They can work on cutting-edge open source projects on the side just because they’re cool. If the project turns into the next Hadoop, Lucene, or Linux, they’ll be able to turn that experimentation into a job and, quite possibly, a long-lasting career.

How do you work around ageism?
What does every tech recruiter want? An unmarried 21-year-old new graduate of a top computer science institution ready to work long hours and create great things. What about a 22-year-old with a year of experience? Uh. Maybe. Perhaps. Are there any 21-year-olds available?

One of the great, often unspoken, rules of the programming world is that managers have very narrow ideas of the right age for a job. It’s not that managers want to discriminate, and it’s not that humans want to change as they get older — but they do. So everyone clings to stereotypes even if they’re against the law.

This is often most obvious in the hypercompetitive world of tech startups, where the attitude is like the NBA's. If you got stuck finishing your degree, you’re obviously not special enough. This world prizes people who spend long hours doing obsessive things. They like youth, and it’s not uncommon to hear venture capitalists toss aside anyone who’s not a young 20-something.

The good news for programmers is that some employers favour older, more mature people who’ve learned a thing or two about working well with others. These aren’t the slick jobs in the startup world that get all the press, but they are often well-paying and satisfying.

The savviest programmers learn to size themselves up against the competition. Some jobs are targeted at insanely dedicated people who will stay up all night coding, and older programmers with new families shouldn’t bother to compete for them. Others require experienced creatures, and young “rock star” developers shouldn’t try to talk their way into jobs with bosses who want stable, not blazing and amazing.
How much does location matter? If you’re young and willing to pack everything you own into the back of your car and move on, the only thing that’s important about the location of a job is whether you like the burrito place next door. Good food and pleasant surroundings are all that matter.

But where to seek your next job becomes a trickier question when you can’t pack up your car in 10 minutes. If you have a family or another reason that makes a nomadic coding life difficult to impossible, you have to think about the long-term stability of a region before committing to a new employer.

Many programmers in Silicon Valley move successfully from startup to startup. If one doesn’t work out, there’s another being formed this minute. There’s plenty of work in different firms, and that makes it easy to find new challenges, as we’re taught to say.

This may be the major reason that some firms have trouble attracting talent to regions where there’s only one dominant player. If you move to Oregon or Washington and the job doesn’t work out, you could be moving again.

Can you choose a niche to avoid the offshoring ax? Lately many programmers have begun to specialise in particular layers. Some are user interface geniuses who specialise in making the user experience simultaneously simple and powerful. Others understand sharding and big data.

The growth potential of a career in a certain layer of the stack should always be considered, in particular in relation to its vulnerability to offshoring. Some suggest that user interfaces are culture-dependent, thereby insulating user interface jockeys from offshoring pressure. Others think it’s better to pick the next big wave like big data warehouses because a rising tide lifts all boats.

While change is a constant in IT, it may not be practical to jump on every wave. If you have terrible aesthetic judgment, you shouldn’t try to be a user interface rock star. Nor does it make much sense to try to sell yourself as a big data genius if you find a statistics textbook to be confusing. There are limits to how much you can steer your career, but there are ways to play a new wave to your strengths — and to make yourself immune to offshoring.

Should you strike out on your own? An increasingly common career dilemma is whether to stay full-time or switch to being a contractor. Many companies, especially the bigger ones, are happy to work with independent contractors because it simplifies their long-term planning, allowing them to take on projects without raising the ire of executives who sit around talking about head count.

The biggest practical difference is figuring out health insurance and pension benefits. An independent contractor usually handles them on their own. Some find it to be a pain, but others like the continuity they get by keeping the same independent health insurance or pension plan when they switch contracts.

Another big difference is in what you like to do. Regular employees are often curators and caretakers who are responsible for keeping everything running. Contractors are usually builders and problem solvers who are brought in as needed. Those aren’t absolute rules, but for the most part, those who stick around get saddled with maintenance.

Because of this, contractors are often free to specialize in particular technologies, while employees end up specialising in keeping the company running. Both may sell themselves as experts in Oracle or Microsoft or Lucene, but employees are the ones tasked to get a project up and running because the boss needs it by next Friday.

Depending on the culture of the employer, this could mean broad experimentation for full-time employees or an increased likelihood of tending outdated enterprise software far longer than anyone might want.
Is there work beyond tech? Programmers often forget there are many jobs for programmers in companies that have little to do with technology. It’s easy to assume that programmers will always work in tech.

The smart programmer should realise that choosing a nontech employer provides unique career opportunities. These days almost every company requires computer-savvy employees and a strategy for making the most of computer systems. Sales forces need software for tracking leads. Warehouses need software for tracking goods. More often than not, someone has to customise these solutions to suit the needs of the business.

Understanding a company’s business and technology is one of the best defences against outsourcing. Knowledge about many of the popular tools often becomes commoditised, and that often means competing with programmers overseas with much lower costs. But knowledge of two (or more) different realms is not a commodity, and it’s hard to replace.

Smart companies will often create managerial tracks for technology specialists if it’s clear that technology will be a key part of their future. A company with a heavily computerised warehouse, for example, offers a great management opportunity for technologists, because the software the company develops will be a big part of its future strategy. Tech specialists can often play key roles in nontech companies.

The key question is how willing you are to learn the business, whatever it may be. If you just want to talk about pointers and data structures, stick with the tech company. But if you are naturally curious about warehouse design and have always had a thing for other aspects of business beyond IT, recognise that computer-savvy people are much in demand in other sectors as well.

Friday, September 14, 2012

Open source education software unveiled by Google

Online education startups such as the Khan Academy, along with new efforts by MIT, Stanford, and Harvard have helped spur interest in and add legitimacy to the notion of remote learning. Now Google is lending its brainpower to the rapidly growing area by releasing a tool called Course Builder, open source software designed to let anyone create online education courses.


The Course Builder project came by way of another program Google ran earlier this year called Power Searching With Google. The Massive Open Online Course (MOOC), which attracted approximately 155,000 students from 196 countries, allowed Google to marry some of the practices now common to online instruction with the company's robust array of collaboration and communication tools. A new Power Searching session begins in two weeks.

According to the introductory video, presented by Peter Norvig, director of Google Research, usage of the software won't require high-level programming skill, and should be accessible to anyone with the ability to build and maintain their own website.

"The Course Builder open source project is an experimental early step for us in the world of online education," Norvig said. "It is a snapshot of an approach we found useful and an indication of our future direction. We hope to continue development along these lines, but we wanted to make this limited code base available now, to see what early adopters will do with it, and to explore the future of learning technology."
In addition to offering a new platform for empowering educators, the effort is also a unique opportunity to connect with Google's research team. Over the course of the next two weeks, Google plans to directly interact with Course Builder users via Google Hangouts. The Course Builder support site is already live and the free software download has already received its first update. For those unsure about their level of skill in relation to use of the software, Google's Course Builder Checklist offers a reassuring primer on how to get started and exactly what to expect.

Monday, September 10, 2012

Where NASA and Instagram get open source databases


The PostgreSQL Global Development Group has announced PostgreSQL 9.2, an open source database release with native JSON support alongside index, replication and performance improvements.



NOTE: JSON (JavaScript Object Notation) is a text-based data-interchange format that is widely agreed to be "easy for humans to read and write" and equally easy for machines to "parse and generate". It is based on a subset of the JavaScript programming language and uses conventions familiar to programmers of the C family of languages.
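A small invented example shows the format, with JavaScript's built-in JSON object (here in a TypeScript snippet) doing the parsing and generating:

// A JSON document is nested objects, arrays, strings, numbers,
// booleans and nulls - a subset of JavaScript literal syntax.
const doc = `{
  "sensor": "a1",
  "readings": [42.7, 43.1, null],
  "ok": true
}`;

const parsed = JSON.parse(doc);      // easy for machines to "parse"...
console.log(parsed.readings[0]);     // 42.7
console.log(JSON.stringify(parsed)); // ..."and generate"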



But what does "performance improvement" really mean with this kind of technology?

Vendors of all kinds love to lean on the term "performance", so what makes an open source next-generation database operate and, well, perform so well in this case?

How it works...

The answer appears to lie in PostgreSQL 9.2's near-linear scalability across up to 64 processor cores, which shares out the burden of processing. That distribution of workloads, plus index-only scans and reductions in CPU power consumption, is what actually speeds this product up.

NASA, Instagram and HP can't be wrong... can they?

Organisations including the U.S. Federal Aviation Administration, Instagram and NASA run applications on PostgreSQL, and HP has adopted it too, to power its HP-UX/Itanium solutions.

Improvements in vertical scalability are also said to increase PostgreSQL's ability to efficiently utilise hardware resources on larger servers.

So just how fast is fast here?

Numerically, this means:

* Up to 350,000 read queries per second
* Index-only scans for data warehousing queries (sketched below)
* Up to 14,000 data writes per second
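
A word on the second item: before 9.2, an index lookup still had to visit the table itself to check row visibility, whereas an index-only scan can answer a query entirely from an index that covers every requested column. Here is a rough illustration, again via psycopg2 with our own made-up table, of how to coax the planner into one:

import psycopg2

conn = psycopg2.connect("dbname=demo user=demo")  # hypothetical DSN
conn.autocommit = True  # VACUUM cannot run inside a transaction
cur = conn.cursor()
cur.execute("CREATE TABLE readings (sensor int, value numeric)")
cur.execute(
    "INSERT INTO readings "
    "SELECT i % 100, random() FROM generate_series(1, 100000) i")
# An index that covers every column the query touches...
cur.execute("CREATE INDEX readings_cover ON readings (sensor, value)")
cur.execute("VACUUM ANALYZE readings")  # keeps the visibility map fresh
cur.execute("EXPLAIN SELECT sensor, value FROM readings WHERE sensor = 7")
for (line,) in cur.fetchall():
    print(line)  # expect: Index Only Scan using readings_cover ...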

NOTE: PostgreSQL is an open source object-relational database system. It has more than 15 years of active development and runs on all major operating systems, including Linux, UNIX (AIX, BSD, HP-UX, SGI IRIX, Mac OS X, Solaris, Tru64) and Windows.

Saturday, September 8, 2012

Teach Your Kids Basic Programming With Super Scratch Programming Adventure


If you think you might have a future programmer on your hands, it's time to introduce your kid to Scratch. It's a programming language that teaches kids the concepts of coding while making it easy for them to create animations, games, and more, then share them all with friends online. A new book from No Starch Press, Super Scratch Programming Adventure!: Learn to Program By Making Cool Games, makes it even easier to get started.



Scratch was created at MIT Media Lab's Lifelong Kindergarten Group. It's targeted at ages eight and up, although my six-year-old finds it to be a lot of fun. If your child can read, understand numbers, and control a mouse, they can probably get started with Scratch, particularly if they've used creative software like drawing programs; younger children may just need a little more help than older ones. To create programs in Scratch, you simply drag and drop colored puzzle-piece blocks of code written in simple language, snap them together, then change the variables.
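
For grown-ups who think in text rather than blocks, here is a loose Python analogy, not real Scratch, of the classic beginner script "when green flag clicked, forever: move 10 steps, if on edge, bounce":

class Sprite:
    """A stand-in for Scratch's cat, just enough to mimic two blocks."""
    def __init__(self):
        self.x = 0
        self.step = 10          # the "variable" a child might tweak

    def move(self):             # the "move 10 steps" block
        self.x += self.step

    def bounce_if_on_edge(self, width=480):   # the "if on edge, bounce" block
        if not 0 <= self.x <= width:
            self.step = -self.step

cat = Sprite()
for _ in range(100):            # Scratch's "forever" loop, bounded here
    cat.move()
    cat.bounce_if_on_edge()
print(cat.x)

In Scratch, each of those lines is a colored block a child snaps into place.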

Super Scratch Programming Adventure! helps your budding developer learn to use Scratch with a comic book story. Each section begins with a continuing piece of a story that ends by giving the reader a problem to solve with Scratch. Along the way, kids learn about software-building terms like “sprite,” “loop,” and “variable.” At the end of the book, they are rewarded with the fruits of their own creation, a game they can play knowing they made it themselves.

The best part about using Scratch is the wide range of skills involved, from right-brain creativity to left-brain reasoning. It also encourages sharing, collaborating, and learning from others, all additional non-programming skills useful in the programmer's world.

Super Scratch Programming Adventure! was written by the LEAD Project (Learning through Engineering, Art and Design), which began in 2005 as a collaboration among The Hong Kong Federation of Youth Groups, the MIT Media Lab, and The Chinese University of Hong Kong.

Open-Source Platform for API Management launched by Alcatel-Lucent


In a bid to make it easier for operators to open up their networks to developers, Alcatel-Lucent has introduced an open source and cloud-based API (application programming interface) management platform called apiGrove.



Operators have largely missed the boat on the smartphone application revolution. But by publishing APIs and opening up their networks -- including features such as billing, location and presence information -- to developers, operators can get a second chance to play a more important role.

However, for the promise of APIs to be fully realized, there needs to be standardization of the underlying technology, according to Laura Merling, senior vice president of Application Enablement at Alcatel-Lucent.

"I believe that API management is going to be a part of everybody's business and therefore it needs standardization, and also needs community input on what it should look like," said Merling.

ApiGrove will make it easier for operators to try the concept without having to make much initial investment, according to Merling.

An API management platform is used to publish the interfaces, as well as to configure rules for how they can be used, including who can access them and the number of transactions that are allowed.
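
As a flavour of what such rules amount to in code, here is a minimal sketch, ours rather than anything from apiGrove itself, of an admission check that combines an access list with a per-minute transaction quota:

import time
from collections import defaultdict

# Hypothetical rule table: which consumers may call the API, and how often.
RULES = {"acme-app": {"allowed": True, "per_minute": 60}}
_recent = defaultdict(list)   # consumer -> timestamps of recent requests

def admit(consumer):
    rule = RULES.get(consumer)
    if rule is None or not rule["allowed"]:
        return False                       # unknown or blocked consumer
    now = time.time()
    window = [t for t in _recent[consumer] if now - t < 60]
    if len(window) >= rule["per_minute"]:
        _recent[consumer] = window
        return False                       # transaction quota exhausted
    window.append(now)
    _recent[consumer] = window
    return True

print(admit("acme-app"))   # True
print(admit("unknown"))    # False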

The apiGrove installation package, source code, and documentation are available for download from GitHub. The ultimate goal is to turn apiGrove into an Apache project.

"It has been submitted to go through the necessary incubation process, but that takes time ... A lot of it is about community adoption, so we are doing a lot of outreach to drive adoption," said Merling.

Alcatel-Lucent also believes that API management makes sense running in the cloud, as opposed to on-premise in the operator's data center. ApiGrove could also manage the cloud platforms' own APIs, exposing their capabilities to developers.

The company is talking to a number of cloud providers, and is making the interfaces of its own CloudBand platform manageable via apiGrove as well.

Alcatel-Lucent will also offer a Premium API Management Platform before the end of the year. Operators will have to pay for this version, which will have more advanced cluster and security features than apiGrove, including XML validation, according to Merling.

For example, the Speaker Manager feature lets an operator keep track of rate limiting across a cluster of API management instances.

The Premium API Management Platform will also include a service composition framework, which allows several API calls to be combined and presented to developers as one, making their work easier.
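
The idea is simple to sketch. Suppose an operator exposes location, billing and messaging APIs; a composed service wraps all three into one developer-facing call. Everything below is a hypothetical stub, not Alcatel-Lucent's framework:

# Stub operator APIs: hypothetical stand-ins for real network services.
def get_location(user):
    return (59.33, 18.07)                       # lat/long from the network

def charge(user, cents):
    print(f"billed {user} {cents} cents")       # operator billing

def deliver(user, message, near):
    print(f"to {user} near {near}: {message}")  # operator messaging

def send_paid_alert(user, message):
    """One composed call that hides three underlying API calls."""
    location = get_location(user)
    charge(user, cents=5)
    return deliver(user, message, near=location)

send_paid_alert("alice", "Your parking expires in 10 minutes")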

The service composition framework is currently being evaluated for release as open source as well.

The end-game for Alcatel-Lucent is to help turn networks into a software platform, where all equipment comes with an interface for developers, Merling said.

Monday, September 3, 2012

How Twitter tweets your tweets with open source


It came as little surprise when Twitter recently joined The Linux Foundation. You couldn't tweet about your dinner, your latest game, or the newest political rumor without open-source software.



Chris Aniszczyk, open-source manager at Twitter, explained just how much Twitter relies on open source and Linux at LinuxCon, the Linux Foundation's annual North American technology conference. “Twitter's philosophy is to open-source almost all things. We take our software inspiration from Red Hat's development philosophy: 'default to open.'”

Specifically, according to the company, “The majority of open-source software exclusively developed by Twitter is licensed under the liberal terms of the Apache License, Version 2.0. The documentation is generally available under the Creative Commons Attribution 3.0 Unported License. In the end, you are free to use, modify and distribute any documentation, source code or examples within our open source projects as long as you adhere to the licensing conditions present within the projects.” Twitter's open-source software is kept on GitHub.

You're welcome to use this code. Indeed, Aniszczyk strongly encourages others to use and build on it.

Twitter itself is famous, or infamous in some circles, for having been built on Ruby on Rails. Today, though, Aniszczyk said, Twitter has moved to Java and a list of open-source programs longer than your arm.

If Unix and Linux are operating systems made of many utilities loosely coupled, then Twitter is a social network made up of many open-source programs loosely coupled together. Some parts will be familiar to anyone in Linux or Web development circles.

Twitter's core operating system is Linux 2.6.39, and for its core database it uses MySQL. To manage the source code for the rest, Twitter uses Git, Linus “Linux” Torvalds' other software baby.

But, let's cut to the chase, what actually happens when you tweet?

First, unless you've never used “The Twitter,” you know that a tweet is a short message of at most 140 characters, or about 200 bytes. When you send this tweet, it will soon be “fanned out” to the people who read your tweets. Sounds easy, right? “Wrong!” proclaimed Aniszczyk.

The problem is Twitter's scale. Twitter handles some 2.8 billion tweets in a typical week, which works out to about 5,000 tweets a second on average. But, Aniszczyk said, things aren't always average. When someone noticed the singer Beyonce showing a baby bump, traffic went up to 8,800 tweets per second (TPS). The last Super Bowl? 12,000-plus TPS. And when someone got the idea that everyone should go see an anime movie and then tweet about it, Twitter faced one of its greatest challenges: 25,088 TPS.

Each of these tweets is first registered as a status update. Then each one is given a unique ID by a program called Snowflake. Next, its geolocation data is noted by Rockdove, a program that hasn't been made open-source yet.
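
Snowflake's trick is minting IDs without a central counter: a millisecond timestamp in the high bits, a worker number in the middle, and a per-millisecond sequence at the bottom. The bit layout in this Python sketch follows Snowflake's published design; the rest of the code is our own illustration (the real service runs on the JVM):

import time
import threading

EPOCH = 1288834974657  # Twitter's published custom epoch (Nov 2010), in ms

class Snowflake:
    def __init__(self, worker_id):
        self.worker_id = worker_id & 0x3FF   # 10 bits for the worker
        self.sequence = 0                    # 12-bit per-millisecond counter
        self.last_ms = -1
        self.lock = threading.Lock()

    def next_id(self):
        with self.lock:
            ms = int(time.time() * 1000)
            if ms == self.last_ms:
                self.sequence = (self.sequence + 1) & 0xFFF
                if self.sequence == 0:         # counter wrapped:
                    while ms <= self.last_ms:  # wait for the next millisecond
                        ms = int(time.time() * 1000)
            else:
                self.sequence = 0
            self.last_ms = ms
            return ((ms - EPOCH) << 22) | (self.worker_id << 12) | self.sequence

gen = Snowflake(worker_id=1)
print(gen.next_id(), gen.next_id())  # unique and roughly time-ordered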

Each tweet is then checked by a combination URL shortener and spam detector called t.co. Once past this stage, each tweet is stored in MySQL by Gizzard, a flexible sharding framework for creating eventually-consistent distributed datastores. Now, and only now, is an HTTP 200 response, meaning all has gone well, sent back to your Web browser.

Of course, at this point your tweet hasn't gone out to the world. First, your tweets get started on their way to Bing and other search programs via the Firehose application programming interface (API). Finally, your tweets are ready for fanout, that is, heading out to your friends, family, and fans.

The actual process is handled by FlockDB, an open-source graph database that sits on Gizzard and pulls data from MySQL. FlockDB contains all of Twitter's users and their relationships to one another. Now, armed with your followers' addresses, your tweets are finally on their way.
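
Fanout itself is conceptually simple; the hard part is doing it tens of thousands of times a second. A toy version, with names and structures that are ours rather than Twitter's, looks like this:

from collections import defaultdict

followers = {"ada": ["bob", "cho"]}    # author -> follower list (the graph)
timelines = defaultdict(list)          # reader -> tweet IDs, newest first

def fan_out(author, tweet_id):
    for reader in followers.get(author, []):
        timelines[reader].insert(0, tweet_id)

fan_out("ada", 561675293291)
print(timelines["bob"])   # [561675293291]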

The average time all this takes? About 350 milliseconds. Not bad for a system handling 5,000 TPS, 24 hours a day.

Twitter may be causing some of its would-be partners grief with tighter API rules, but the company itself does an exceptional job of delivering thousands of messages every moment of the day with open-source software.

Tuesday, August 28, 2012

Twitter Joins Linux


Twitter uses and builds a fair amount of open-source software, so it wasn’t too shocking when we read in our inboxes this morning that the social media startup has joined the Linux Foundation.

“Not only is Twitter built on Linux, but open source software is core to its technology strategy,” said a Linux Foundation rep to VentureBeat via email.
“It’s investing even more in the platform now as the company evolves and positions itself for the future. Linux has become even more dominant among web-based companies as the ‘hacker way’ has become pervasive among the newest generation of startups.”
Ah, yes, the Hacker Way. Or should we say, the “Hacker Way.”
Espousing open-source ideals, at least in spirit, has become increasingly common among web startups, especially in the Bay Area. Facebook CEO Mark Zuckerberg famously wrote a “hacker way” mini-treatise into his company’s SEC IPO filing.
But in fact, while companies like Facebook and Twitter rely on open-source technologies and programming languages to get their various jobs done, their businesses are conceptually based on proprietary software, not open-source software.
As famous hacker Eric “esr” Raymond pointed out in a recent interview with VentureBeat, the true hacker way means “to give control to the individual, to respect his or her privacy, to create tools for autonomy and liberty, and to encourage creative re-use of software” — only parts of which are built into Twitter’s products.
While Twitter has open-sourced some of its software, such as the Iago load generator and the Bootstrap design framework, vast expanses of Twitter code remain under lock and key, and the company's recent and coming API changes mean it's getting farther away from anyone's definition of free and open-source software.
This is a problem. Specifically, it’s a recruitment problem.
Twitter needs to keep pulling in the best, brightest, neckbeardiest developers the world can offer, and it can't do so without some commitment to open-source communities. The company recently hosted an open-source event with thinly veiled recruitment mechanisms built in for exactly this reason: great developers and open-source software go together like peanut butter and jelly, and the more convincingly a company shows it believes in open source, the better its chances of hiring those developers in a highly competitive market.
All that being said, Twitter does have a vested interest in helping to advance the cause of Linux in particular, and some participation in open-source communities is better than none at all.
As a web-based business, Twitter, like every other web service, is supported by tens of thousands of Linux servers. In a statement on today’s news, the company said it intends to partner with the Linux Foundation to promote and protect Linux, the open-source operating system.
“Linux and its capability to be heavily tweaked is fundamental to our technology infrastructure,” said Twitter open-source manager Chris Aniszczyk in the statement.
“By joining the Linux Foundation, we can support an organization that is important to us and collaborate with a community that is advancing Linux as fast as we are improving Twitter.”

Friday, August 24, 2012

Early-access OpenCL SDK released by Adapteva


Adapteva, a privately held semiconductor technology start-up, today announced that it is providing an early access release of an OpenCL SDK for the Epiphany multicore architecture. The OpenCL implementation was completed together with Brown Deer Technology, a leading innovator in open-source heterogeneous computing.



OpenCL (Open Computing Language) is an open, royalty-free standard for cross-platform, parallel programming that is now reaching widespread adoption in servers and handheld/embedded devices. It provides a portable API for accessing the compute capabilities of a platform, accelerating performance in a wide spectrum of applications in numerous market categories, from gaming and entertainment to scientific and medical software. With the OpenCL SDK, Epiphany programmers will be able to easily accelerate compute-intensive tasks across an arbitrary number of cores on Epiphany-based accelerator solutions.
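
For the curious, here is roughly what a minimal OpenCL host program looks like through the pyopencl bindings, runnable on any conformant OpenCL device. This is our own sketch, not code from Adapteva's SDK:

import numpy as np
import pyopencl as cl

SRC = """
__kernel void scale(__global float *x, const float a) {
    int i = get_global_id(0);
    x[i] = a * x[i];
}
"""

ctx = cl.create_some_context()           # pick any available OpenCL device
queue = cl.CommandQueue(ctx)
data = np.arange(8, dtype=np.float32)
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                hostbuf=data)
prg = cl.Program(ctx, SRC).build()       # compile the kernel for the device
prg.scale(queue, data.shape, None, buf, np.float32(2.0))
cl.enqueue_copy(queue, data, buf)        # read the result back to the host
print(data)                              # [ 0.  2.  4. ... 14.]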



“Adapteva's Epiphany multicore architecture scales to thousands of cores on a single chip. Such massive parallelism requires a battle-tested programming model that scales well. We chose OpenCL because it is an open standard with broad industry support and because it fits perfectly with our approach to heterogeneous computing,” said Andreas Olofsson, CEO at Adapteva. “We were very fortunate to be able to leverage the COPRTHR OpenCL implementation developed by Brown Deer Technology for ARM and x86 processors. The speed with which BDT ported its COPRTHR implementation to the Epiphany architecture, and the quality of the results, were simply outstanding, and speak volumes about the level of innovation and expertise at BDT and the maturity and flexibility of the Epiphany architecture.”

“Adapteva has delivered an architecture that supports massive on-chip parallelism with impressive power efficiency. OpenCL provides the perfect foundation for such a processor," said Dr. David Richie, President at Brown Deer Technology. "We leveraged this API in an SDK that provides programmers with a clear path for code development. Programmers will find their parallel algorithms mapping naturally to the Epiphany architecture. The chip is designed for massively parallel algorithms. Extracting performance from the architecture becomes relatively easy as a result of this design.”

Target Markets and Availability

The Epiphany OpenCL SDK is currently in Beta release and available to early access partners.
Coding examples and white papers will be published on Adapteva’s corporate site in the coming month.

Thursday, August 23, 2012

Typing.io Lets You Practise Typing In Programming Environments


Programming languages are difficult to wrap your head around at first; you need to train your muscle memory for the unusual characters and symbols that pepper every line of code. Typing.io is a tool to help budding programmers practise and become more efficient at coding.



Typing.io is not meant as a tool to teach you programming. It's meant solely as a way to practise typing in different open-source coding environments, including JavaScript, Ruby on Rails, Perl and C, among others. When you load one of the lessons, you simply type over the text on the screen; when you make an error, the text turns red. If you're new to programming and need help practising your skills, Typing.io is a great place to start.

Tuesday, August 21, 2012

Hadoop gets Real-Time Processing from Open Source vets


Nodeable solves real-time Big Data issues



Big Data is certainly on a lot of people's lips these days, and there is no doubt that we are generating lots of data. Analyzing that data and making it useful is fueling millions of dollars of investment in companies around Hadoop, NoSQL and the like. One area where Big Data still has challenges is real-time analysis: with all of that data, getting actionable intelligence into users' hands as events happen is hard. That is the challenge Nodeable is seeking to tackle.



Nodeable is led by a couple of open source veterans. Dave Rosenberg, formerly of MuleSource among other open source ventures, is the CEO of Nodeable. With him are several folks who have worked with him at his previous open source companies. Additionally, Matt Asay, another veteran open source company builder, is on board at Nodeable as well.

I had a chance to sit down with Dave and talk about what he and his team are doing with Nodeable. You can listen in on our 15-or-so-minute conversation below. Let me warn you, the audio is a bit uneven at some points, but it isn't too bad, and I think the quality of the conversation is well worth the rough patches in the recording.

The Nodeable team is using an open source program called Storm, originally developed at BackType, the startup Twitter acquired, and later open-sourced by Twitter. Nodeable is seeking to commercialize Storm and build on top of it. This is a model that Dave has followed in the past and has lots of experience with.
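
Storm's core abstraction is a topology of spouts, sources that emit tuples, and bolts, stages that transform them. Storm itself runs on the JVM, so the Python sketch below only illustrates the dataflow idea, not Storm's actual API:

import itertools
import random

def spout():                          # a source that emits tuples forever
    for i in itertools.count():
        yield {"id": i, "value": random.random()}

def bolt(stream):                     # a stage that transforms the stream
    total = 0.0
    for n, event in enumerate(stream, start=1):
        total += event["value"]
        yield event["id"], total / n  # running average, updated per event

# Wire spout to bolt and take the first three results.
for event_id, avg in itertools.islice(bolt(spout()), 3):
    print(event_id, round(avg, 3))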

Nodeable has been around for a while now, but only recently went public with this model. It is not competitive with Hadoop or other Big Data solutions; rather, it brings another needed facet, real-time analysis, to Big Data analytics.

So have a listen to Dave and check out a new and different Big Data solution coming to market.

Version 1.5 of Open Source DAM Software released by Razuna


The newest version of the open source digital asset management (DAM) platform Razuna is now available.



Razuna ApS announced late last week the availability of version 1.5 of the platform, which can be downloaded, used via a hosted service or run on a dedicated cloud server.

Customization, Rendering Farm
Enhancements in the new release include an option to fully customize Razuna, the ability to log in with a social media username and password via the integrated Janrain plugin, new Rendering Farm functionality, and major updates to search and the overall look and feel.

The Rendering Farm distributes the job of encoding many files to other servers, whether a dedicated one in-house or one in the cloud. Customization options include the ability to modify tabs, dialogs or look and feel, and the company said that the new caching system “dramatically improves” overall performance and supports such caching engines as Memcached and MongoDB.

Razuna now supports scheduled backups of assets and data within the platform, as well as the ability to export metadata to a spreadsheet.

There’s also additional support for cloud storage, such as Amazon S3, Nirvanix or Eucalyptus, and there’s a new version of the application programming interface (API), which facilitates integration into an organizational environment.

Partner Program Overhaul
CTO and Razuna founder Nitai Aventagiato told news media that, with the latest additions, Razuna is “truly an enterprise level digital asset management software” that is available to companies of any size via the hosted offering.

Later this month, a new partner and OEM program will be launched, which the company said was due to an increasing demand for OEM solutions. CEO Jens Strandbygaard said in a statement that the partner program is undergoing a complete overhaul, adding that the API has allowed software vendors “to embed Razuna deep into their existing technologies” to leverage enterprise-level DAM features.

Strandbygaard also said that resellers and systems integrators will benefit from the new program, “since they will be awarded a higher commission as well as being able to offer our enterprise package to large scale clients.”

The Denmark-based Razuna, founded in 2005, said that its platform is used by more than 5,000 businesses worldwide every day.