The SOAP versus REST debate reminds me of the EJB versus POJO arguments years ago. When EJBs first appeared on the scene, the Java developer community embraced them with full vigor, only to realize later that it was not the wisest thing to do. EJBs were over-complicated, with convoluted, intertwined component, security, configuration, transaction and application models. They were designed with very specialized, complex, enterprise-scale scenarios in mind - scenarios irrelevant to most applications. Frameworks such as Hibernate, Spring and JDO, and simpler design patterns with POJOs, showed the way towards a more elegant approach. As a result, the developer community today is far better off.
SOAP has the same pedigree as EJB. SOAP originated primarily to serve the need for .NET-to-Java integration within the enterprise firewall. The myriad WS-* specs were geared towards the complex security, reliability and transaction requirements of inter-system communication within large enterprises. Like EJBs, SOAP was embraced quickly by the developer community since it represented the first real standard for interoperability since the failed promises of CORBA. With HTTP transport and XML-based payloads in typical implementations, SOAP seemed like the perfect solution for all inter-system communication. However, as with EJBs, the developer community has come to realize that perhaps we have been on the wrong track by adopting SOAP for everything. RESTful patterns have emerged that seem more appropriate for the vast majority of applications and are becoming more and more popular. The abuse of SOAP-based Web Services will hopefully continue to diminish. REST has done to SOAP what POJO frameworks did to EJBs.
One of the issues with REST is the lack of a formal standard. REST is simply an architectural pattern rather than a standard like SOAP. If you wanted to publish to the world a formal API to access your system, SOAP offers a well-understood, tooling-friendly WSDL format. REST has had no such standard - until now. The JSR 311 (JAX-RS) spec represents a first step towards more formalized standards for REST, which will go a long way towards its wider adoption.
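To give a sense of what JAX-RS looks like, here is a minimal sketch of a resource class using the JSR 311 annotations. The OrderResource class, its URL layout and its XML rendering are hypothetical, for illustration only:
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// Hypothetical resource: GET /orders/{id} returns an XML representation
@Path("/orders")
public class OrderResource
{
    @GET
    @Path("{id}")
    @Produces("application/xml")
    public String getOrder(@PathParam("id") String id)
    {
        // A real implementation would look up the order; stubbed here
        return "<order id=\"" + id + "\"/>";
    }
}
A JAX-RS runtime maps the annotated class into the URI space, so a GET on /orders/123 would invoke getOrder("123").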
Talking of standards, it was interesting to see an entire keynote from Microsoft at JavaOne 2009 (yes, the evil empire at JavaOne) addressing SOAP interoperability. After all these years of SOAP standards, we are still figuring out interoperability between .NET- and Java-based SOAP implementations! To me that says a lot about the complexity of the SOAP standards.
To be clear, SOAP does have its place. SOAP is useful in complex enterprise scenarios involving things such as:
- end-to-end rather than point-to-point security
- non-HTTP transports
- 2-phase commits across heterogeneous systems
- guaranteed reliability and SLAs in messaging
David Chappell has articulated this well.
It was heartening to see a great deal of interest in REST at JavaOne this year. Clearly the "REST-lers" have scored a point over the "SOAP-ists". This trend is likely to continue, and like at the end of the EJB saga, we developers will be far better off as a result.
Friday, March 13, 2009
The Three Cs - Cloud, Collaboration, Client
If I were a venture capitalist looking to invest money in the software space, I'd look for innovative new companies that incorporate elements of the "three Cs" - Cloud, Collaboration and Client. In my opinion, the three Cs represent the next big wave in the software space. They have the potential to significantly impact the way software is built and deployed, and the way users use software, interact with systems, and interact with each other. Let's examine the three Cs.
Cloud
Cloud can be defined as a collection of computing resources made publicly available for access through a standard Internet connection and using well-defined APIs. There are three aspects to the cloud - storage, applications and computing.
Cloud Storage - This is the basic and most common use of the cloud - a place to store electronic content. A classic example of cloud storage is online storage of photographs from your camera with sites such as Picasa Web. Another example is regular online backup of the contents of your computer using services such as Apple's iDisk. Amazon's S3 (Simple Storage Service) is an example of a general purpose cloud storage solution.
Cloud Applications - This is the next level of cloud usage - applications hosted on the cloud that can be used without installing any software on the user's local machine. Salesforce.com is perhaps the poster child for this with their successful hosted CRM (Customer Relationship Management) solution. The term SAAS (Software as a Service) is often used to describe such cloud applications. In the last couple of years we have also seen the emergence of application development frameworks such as Google's App Engine which offer a cloud-based platform for building your own applications that are then hosted on the cloud. The term PAAS (Platform as a Service) is often used to describe these offerings.
Cloud Computing - This is the third and perhaps the newest buzz in the cloud space - computing power available for rent in the cloud. You rent virtual machines online for whatever purposes you need, for however long you need, and pay according to usage. The term IAAS (Infrastructure as a Service) appropriately describes such services. Amazon's EC2 (Elastic Compute Cloud) is clearly the most prominent player in this space.
So is all the cloud excitement really just hype? Absolutely not. This thing is real and here to stay in a big way. Cloud's value proposition is quite simple and intuitive.
* With Storage, the value proposition is reduced cost, reliability, security, and universal access. I've been using an external hard drive for backups at home, which now costs me roughly the same as using Amazon's S3 (<10GB of data with infrequent transfer in and out of AWS, over the 5-year lifespan of my drive; a rough back-of-the-envelope check follows this list). With S3, I don't have to worry about my drive failing or getting stolen (my 1TB drive has no easy password protection mechanism other than encrypting each file). And unlike my drive at home, S3 is accessible from any machine with an Internet connection.
* With Applications, the value proposition is TCO (Total Cost of Ownership), particularly for businesses. Traditional enterprise software has always been a TCO headache for companies, which spend a huge amount of their resources installing, maintaining, configuring, upgrading and patching software and the associated hardware. Having someone else do that for you more cheaply, taking advantage of economies of scale, makes perfect economic sense. It also makes good business sense because it lets companies focus more on their core business.
* With Computing, the value proposition is cost, flexibility and reliability. Renting computing resources and paying as you go based on usage, rather than having to buy, install and maintain hardware and systems software, has both cost and reliability benefits. I like to use the electric utility grid analogy here - buying electricity from the utility grid works out much cheaper and more reliable than every home having its own electric power generator. The additional benefit is the flexibility of being able to scale the computing resources you need up or down as your loads fluctuate. Also, you can pick and choose from a variety of hardware and software configurations at any time. This is a boon for small startups that don't have to buy hardware and can rent just what they need or can afford, scaling as they grow. In fact, it is possible to start a purely virtual company that does not own a single server box.
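The back-of-the-envelope check on the storage comparison above, assuming S3's list price at the time of about $0.15 per GB-month and ignoring my small transfer charges: 10GB x $0.15 per GB-month x 60 months = $90, which is roughly what a decent external drive cost.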
Some critics argue that all the elements of the cloud have been around for some time now and don't understand why there is all this hype suddenly, as if a new technology revolution has taken place. While I partially agree with the critics, I think what has happened recently is that the cloud has reached a tipping point. IMO, there are three reasons for this tipping point. The first is bandwidth: high-speed access to the cloud can now be taken for granted for a large segment of businesses and homes. Unlike a few years ago, today I could probably get high-speed Internet access at a Motel 6 in rural Kansas. The second is virtualization technology, which has matured to make cost-effective virtual machines on commodity hardware possible. The third, and maybe the most important, is that the "big boys" are entering this space in a big way - Google with App Engine and Apps, Amazon with EC2, S3, SDB and SQS, Microsoft with Azure and OfficeLive, and IBM with its massive cloud initiative. These technology leaders are bound to drive innovation and adoption on a massive scale.
So everything about the cloud seems very rosy. Surely there must be some challenges. Of course, there are several.
* Security and Privacy - Security of content in the cloud is a concern for everyone, especially for enterprise customers who may hold sensitive customer and business data. Ensuring that data in the cloud can be accessed only by those authorized is absolutely critical. Recent compliance requirements such as SOX also force companies to put more processes around securing data. While security concerns are genuine, there are good solutions. Today companies are storing their critical business data on the cloud, e.g. the hundreds of companies that entrust their CRM data to Salesforce.com to be stored alongside data from their competitors (possibly in the same database instance!). Privacy concerns are important for individual consumers who want guarantees that their personal information will not be abused in any way. Ultimately it all comes down to trust, and as the cloud becomes more mainstream, it is likely companies and individuals will become more and more comfortable with most (if not all) of their data on the cloud.
* Vendor Lock-In - This is a major issue that is perhaps the biggest challenge to cloud adoption. With SAAS and PAAS, you are locked into a particular cloud vendor. What if you decide a year later to go elsewhere? There may not be an easy way to migrate to another cloud services provider. At a recent SDForum Cloud Services SIG talk, the term "cloud-neutrality" was mentioned several times. Until standards emerge and are adopted widely by cloud services providers, lock-in is going to be a reality.
* Cloud Interoperability - This is going to be an issue for enterprise customers who move to using business applications on the cloud. Consider a company that uses Salesforce.com. Say the company acquires two companies - one using Workday and another using NetSuite. How does the company go about consolidating these three into one? Even simple single sign-on across all three will be a major challenge, let alone data or business process integration. Web Services standards help to some extent, assuming what you need is exposed as Web Services by the SAAS/PAAS vendor, but they do not solve the problem of rationalizing data semantics, e.g. how do you rationalize the definition of Customer in one system with that in another? While this problem has always existed even with non-cloud applications from different vendors, with the cloud the problem is compounded because the cloud applications are complete black boxes to you: you have no access to the underlying data stores and app infrastructure, which are hidden behind the SAAS/PAAS provider's proprietary systems. This has the potential to be a huge problem and perhaps a great opportunity for new companies providing "inter-cloud" migration/integration tools and services.
Collaboration
Once content is in the cloud, it is logical to think of how users can collaborate with each other on that content with the content remaining in the cloud. The last part is key - content that is collaborated on remains in the cloud and is not downloaded to the user's machine. So we are not talking of (say) one user editing a PowerPoint presentation, posting or e-mailing it, and another user pulling it down, making changes and posting or e-mailing another version. We are talking of real-time, concurrent editing of cloud content collaboratively by multiple users. Using PowerPoint as an example, new companies such as SlideRocket (now part of Adobe) and InstaColl offer such collaborative presentation software alternatives to Microsoft PowerPoint. Google offers simple word processing and spreadsheet solutions with Google Docs. Microsoft of course also offers such a service with OfficeLive but it still requires traditional Office applications installed on your machine and is therefore more of a cloud storage solution and less of a collaboration solution as described above.
For those who have struggled with managing and sharing versions of Word, Excel and PowerPoint files, I'm sure the notion of cloud-based collaboration without ever having to work with local versions of files is appealing. However, it remains to be seen whether commercial-grade products can deliver a rigorous solution that handles the complexity of managing real-time concurrent updates, merge conflict resolution, branching and versioning. This is strikingly similar to what revision control systems such as ClearCase and SVN do for source code, and we know the complexities involved when a large group of geographically dispersed developers works on a shared code base. But it is a surmountable problem, and some winners are likely to emerge given the huge potential for products that improve widely used office productivity software.
Real-time collaboration tools such as screen sharing, instant messaging, and audio and video streams are powerful and familiar, and they become even more powerful with the cloud because cloud content is accessible from anywhere. You could, for example, launch a conference from any machine (or device) and collaborate on content residing in the cloud.
While screen sharing based collaboration is powerful, there is another even more powerful, emerging collaboration technique that I call "gesture replay". Unlike in screen sharing, where collaboration is achieved by scraping the pixels off a presenter's screen and sending them across to other participants in the collaboration session, in gesture replay the presenter's user gesture alone is sent across to all participants. The user gesture is then replayed at each participant's client, resulting in the same state transition in each participant's application as in the presenter's. Of course, each participant has to be running the same client software, and the collaboration is limited to that client software rather than the entire desktop. For example, all users connected to a Web site loading a Flash-based application can collaborate inside that application using gesture replay. With gesture replay, the user gesture event being propagated to all participants is very compact and bandwidth-efficient because it only involves information about the event, not the application state resulting from execution of the event. This results in very efficient, fast, real-time responses for all participants. With gesture replay, you can get the responsiveness of a native instant messaging application in a screen sharing-type collaboration.
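To make the mechanism concrete, here is a minimal sketch in Java of the gesture replay idea. GestureEvent, SlideDeck and CollaborationClient are hypothetical names for illustration; a real system would also need event ordering, conflict handling and late-joiner synchronization:
import java.io.Serializable;

// Hypothetical types for illustration. Only the compact gesture event
// travels over the wire - never the resulting application state.
class GestureEvent implements Serializable
{
    final String action;  // e.g. "nextSlide"

    GestureEvent(String action)
    {
        this.action = action;
    }
}

class SlideDeck
{
    private int current = 0;

    void advance()
    {
        current++;
    }
}

class CollaborationClient
{
    private final SlideDeck deck = new SlideDeck();

    // Every participant replays the same event locally, so each client
    // arrives at the same state as the presenter's client
    void onGestureReceived(GestureEvent e)
    {
        if ("nextSlide".equals(e.action))
        {
            deck.advance();
        }
    }
}
The point is how little travels over the wire: a tiny event object rather than a stream of screen pixels.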
As noted above, gesture replay involves replaying the presenter's gesture in each participant's client. This obviously implies that the logic and knowledge to replay the gesture resides in each client which in turn implies that such clients are powerful applications and not simple Web pages. Further, it also implies that the clients run within powerful runtimes that can efficiently execute events in real-time. This brings us to the third "C" of the three Cs - Client.
But before we move on to the Client, it is worth noting that real-time collaboration with gesture replay has some scalability challenges with HTTP. Proprietary solutions exist today to solve the scalability problem. The upcoming Servlet 3.0 spec (JSR 315) with its suspend-able/resume-able request support will offer a standards-based solution which is likely to promote wider adoption of such collaboration techniques.
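As a rough sketch of how the Servlet 3.0 async API could support this, consider a long-poll endpoint for distributing gesture events. GestureServlet and its queue-based broadcast are hypothetical; the key calls are startAsync() and complete(), which suspend and resume a request without holding a container thread:
import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical long-poll servlet: requests are suspended without tying
// up container threads, then resumed when a gesture event arrives
@WebServlet(urlPatterns = "/gestures", asyncSupported = true)
public class GestureServlet extends HttpServlet
{
    private final Queue<AsyncContext> waiting =
            new ConcurrentLinkedQueue<AsyncContext>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
    {
        // Suspend the request; the thread returns to the container's pool
        waiting.add(req.startAsync());
    }

    // Called by the application when the presenter performs a gesture
    public void broadcast(String gestureJson) throws IOException
    {
        AsyncContext ctx;
        while ((ctx = waiting.poll()) != null)
        {
            ctx.getResponse().getWriter().write(gestureJson);
            ctx.complete();  // resume and complete the suspended request
        }
    }
}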
Client
I started programming on IBM mainframes using punch cards as input (man, was that painful!). There was no client. Then came dumb terminals. We gradually moved to minicomputers, workstations and PCs. The client-server era gave birth to widespread adoption of business software that had graphical user interfaces and was client-heavy. With the Internet revolution and the Web, the focus shifted back to the server, with clients being thin and Browser-based. The Web has of course truly revolutionized our lives. However, as we have striven to do more and more with the Browser, which was oriented towards rendering static pages, we have struggled to provide users with engaging, rich, high-performance clients. Web 2.0 has moved us a step above where we used to be in this regard, but we are not there yet. We need the Web 2.5 revolution to get us to where we provide more intuitive, interactive, graphical, responsive interfaces to users.
With Web 2.5, the focus shifts back to the client while maintaining a Browser model - the best of both worlds. RIA (Rich Internet Application) technologies such as Flash, Silverlight and JavaFX are ushering in the Web 2.5 (r)evolution. As this post describes, these RIA technologies provide very efficient and powerful client runtimes (such as the Flash Player), a uniform and consistent target to program to (without having to code for each Browser), and formalized client-side component and event model frameworks for building complex applications containing heavy-duty client-side logic. Such RIAs take advantage of the powerful machines users run their clients on, which traditional DHTML/AJAX applications have not done. A customer once asked us why his iPhone, with at best a 600MHz processor and 128MB RAM, could provide such a cool interface for applications while our DHTML enterprise application running on a dual-core 2GHz box with 2GB RAM was so crummy!
The Web 2.5 (r)evolution is well underway. We are beginning to see more and more enterprise scale applications such as those from Workday that take advantage of the Web 2.5 model.
From a collaboration perspective, the Web 2.5 client model facilitates the kind of gesture replay collaboration described earlier. This is due to
a) the emergence of frameworks and tools from companies such as Adobe (for Flash/Flex), Microsoft (for Silverlight/WPF) and Sun (for JavaFX), which let users build heavy-duty applications that can handle the gesture event execution logic, and
b) efficient runtimes such as the ActionScript VM (Virtual Machine) for Flash or the Java VM for JavaFX - runtimes featuring JIT compilers that allow for high-performance gesture event execution on the client.
So there it is - the three Cs - Cloud, Collaboration and Client. As developers, we need to start orienting our skill sets towards these exciting new trends.
Monday, December 8, 2008
3-Stage Software Testing
One of the 7 Habits of Highly Effective Software Developers relates to Unit Testing. Software unit testing is one of the most important but least popular and most poorly employed aspects of software development. Too often we developers put the onus on QA (Quality Assurance) teams to test software for bugs. QA's role should really be limited to testing things that are not practical for every developer to test, e.g. making sure the software runs on all versions of all Browsers or OSs.
The 7 Habits talks of considering unit tests after completion of every quarter of your task and having a test completed by the end of the task. When you write a test, how should it be written? Should it be a full-fledged test that can be run and verified with automated testing frameworks, or should it be an informal test? While the answer may vary with the size of the development group and the complexity of the application, I like to think of three stages in writing unit tests - stages that progressively lead to complete tests.
Stage 1: Crash Test - The first stage is to write a test that simply verifies that things don't crash, i.e. no unexpected exceptions are thrown and no abnormal conditions occur. Here's a simple test that invokes a service on another server that returns a value object. Successful running of this test merely ensures that nothing is fundamentally broken.
@Test
public void testReport() throws M13Exception
{
    // svs is the service proxy under test (a test fixture); if the call
    // throws, the test fails - that is all this stage checks
    Report report = (Report) svs.getReport("foo");
}
Stage 2: Eyeball Test - The next stage is to examine the results of invoking the functionality being tested. In the example test below, the contents of the Report object returned from the server are printed to a log. This lets you "eyeball" the printed results and get a sense of whether the functionality is working right or not. This stage is an extension of the Crash Test stage.
@Test
public void testReport() throws M13Exception
{
    Report report = (Report) svs.getReport("foo");
    // Print the report XML so a developer can eyeball the output
    logger.info("Report: " + report.toXML());
}
Stage 3: Automated Result-Comparison Test - This is the final stage in writing a test and extends the Eyeball Test stage. In this stage, you write code that compares the results of invoking the functionality being tested with pre-defined, expected results. In the example below, the XML representation of the Report object returned from the server is compared against an XML data file; the test passes if the XML matches the XML in the data file and fails otherwise. Once this stage is done, the test is complete.
@Test
public void testReport() throws M13Exception
{
    Report report = (Report) svs.getReport("foo");
    String fooXML = report.toXML();
    logger.info("Report: " + fooXML);
    // Read the pre-defined expected results from a data file
    // (file is a test fixture pointing at the expected-results XML)
    String reportXML = FileUtils.readFileToString(file, "UTF-8");
    // assertEquals gives a more useful failure message than assertTrue
    assertEquals(reportXML, fooXML);
}
One of the common mistakes people make is to attempt to get to the third stage directly from the beginning. While this may work for simpler tasks, for more complex tasks and software it results in constantly having to change the data set being compared against, since the definition of the data is likely to keep changing until the later stages of development.
So the best practice may be to ensure you have Crash tests and Eyeball tests in place after each task (making sure you at least think of testing at each quarter of the task). If the data sets are stable enough, you can also write Automated Result-Comparison tests at this stage. If not, come back to this stage later in the development cycle.
Note that for all stages you employ a formal testing framework such as JUnit. So you start right from scratch with a formalized test that can be run in an automated fashion even if you are only at the Crash Test Stage. This way you are simply expanding on and extending the tests at each stage.
Happy testing! ... and give those QA guys a break!
Thursday, November 27, 2008
Adobe MAX 2008
Highlights of the Adobe MAX conference in San Francisco, Nov 2008
Flash Player 10
Flash 10's new text engine ("Vellum") provides advanced text rendering capabilities. Layouts such as text wrapping around images - very easy to do in HTML - are now supported in Flash 10. Bi-directional (R-to-L) languages are natively supported. Text rotation no longer requires embedded fonts. The New York Times demoed a new AIR-based news reader application that takes advantage of Flash 10's new text capabilities. CS4 and Flex Gumbo components use the new text engine, which is written entirely in ActionScript.
The ability to allow search engines to index and search content inside a SWF file was unveiled. While it requires some extra work, the fact that this is now possible is significant.
The new text and search capabilities address what used to be one of the major drawbacks of Flash for text-oriented content. One wonders if Flash 10 will usher in a new era where we will see more traditional consumer-facing Web sites adopting Flash.
One intriguing announcement was the "Alchemy" project, which allows your C++ code to run in the Flash Player. The project provides libraries that generate ActionScript code from C++, which can then run in the Flash Player. A demo of OpenSSL ported to run in Flash was very interesting. Encryption may be a good example of established, existing C++ code that one may want to run in the Flash Player.
Another cool demo was the new feature that enables direct Player-to-Player communication using the new RTMFP (Real Time Media Flow Protocol). This is peer-to-peer communication, but it does not support file or document sharing, and it requires a server for the initial connection hookup between peers.
Flash Catalyst (formerly known as Thermo)
This is a new design tool that allows designers to import creative graphics content from Photoshop or Fireworks and turn it into real application components. Designers can control the look and feel of the application, including state transition animations. The output of Catalyst is an "FXP" package that a developer can then import into Flex and use directly.
Catalyst leverages the new XML-based FXG graphics language, which lets you declaratively define the look and feel of components - effectively programmatic skinning without writing code. Hopefully this will reduce the need for graphical skinning, cut the proliferation of image assets used for skinning, and deliver the advantages of programmatic skinning.
A couple of concerns I have about Catalyst: 1) will this end up in "XML hell", with projects containing hundreds of FXG XML files? and 2) will round-tripping between Catalyst and Flex (designer and developer) really work well for large commercial-grade projects?
Flex Gumbo (Flex 4)
This is the next generation of Flex which was previewed and is due to be released next year. The main feature seems to be the new MVC separation at the component level. This ties in with Catalyst - the separation allows for designers to have more control over the look and feel of components using the Catalyst tool. Other than this separation for Catalyst, there does not seem to be anything of major significance. There is some new ColdFusion integration but that is of no use to those of us who don't live in the proprietary world of ColdFusion.
A disappointment for me was that there is no change to facilitate changing CSS at runtime. CSS files are still compiled into the SWF, and the same applies to FXG files. If you need to change the CSS/FXG files after your application is deployed at a customer site, you must write custom code to allow it.
AIR
There seems to be more buzz around AIR, with several well-attended sessions on the technology. The application that won the top award in the Enterprise category was a trading desktop AIR application from NASDAQ.
Adobe announced the new "Wave" AIR application for desktop notification. Using this application, users can get desktop notifications from (say) their favorite social networking sites without having to browse to each site to look for anything new. There is a notification service in the back-end that Adobe hosts and that content producers integrate with.
A sneak preview of "Nitro" widgets provided glimpses of a formalized framework for building widgets that can run on multiple screens. Drag-and-drop of widgets from the Browser to the desktop reminded me of the JavaFX demos of similar functionality. The "Durango" preview showcased AIR mashups - dragging and dropping application components to weave together a composite AIR app.
AIR, with its platform independence, local database, embedded WebKit Browser, JavaScript and ActionScript scripting capabilities, and of course the Flash Player runtime, seems poised to be a powerful channel for widgets/gadgets. It is possible that AIR on the desktop could become as ubiquitous as Flash in the Browser. Interestingly, downloading the free Adobe Reader 9 also installs AIR by default, showing Adobe's desire to get AIR onto as many desktops as Reader. Smart strategy!
PDF
Acrobat sessions introduced new features of version 9, released recently. The most interesting feature to me was the new embedded Flash Player in Reader/Acrobat and better support for a SWF file running inside PDF. It is possible to script JavaScript to interact with the SWF within the PDF. Adobe is moving Acrobat from being a viewer for static documents to a full-blown application platform. Given the popularity of Reader on desktops, this is very promising. However, the ability to debug and profile code in Acrobat needs to improve if Acrobat is really to be an application platform.
One wonders if Adobe will start blurring the lines between its desktop runtimes - AIR, Flash Desktop Player and Reader.
Business Intelligence
SAP Business Objects had a demo booth showcasing their XCelsius product line. A new demo I hadn't seen before was the ability to export a dashboard to a PDF containing a SWF that provides rich interactive capabilities (there is a demo on their Web site). Another new demo was their use of AIR for desktop widgets/gadgets delivering BI content. XCelsius also ships a component SDK that can be used to put together custom dashboards - this is oriented towards IT and consultants rather than the end user. XCelsius uses a home-grown server-side SWF generation/manipulation library.
XCelsius is clearly ahead of the competition in taking advantage of Adobe technologies such as Flash, Flex, AIR and PDF in its BI suite. BI aside, SAP in general seems to have a solid partnership with Adobe. Adobe's PDF generation libraries are baked into all of SAP's OLTP products, and SAP uses Adobe's Connect Web conferencing product for their training solutions.
ILOG (soon to be part of IBM) demoed their Flash/Flex-based visualization library Elixir. ILOG has a long history in the advanced visualization space, although their traditional forte is optimization and business rules. The library is quite rich, including Charts, Maps, Org Charts, Gantt Charts, Treemaps and Gauges (there is no pivot table). The library is resold by Adobe for $799 per license, but the catch is the hefty deployment fees/royalties, which reportedly can run to tens of thousands of dollars a year.
Cocomo
This is a new Platform as a Service offering for building real-time collaboration capabilities leveraging Acrobat.com cloud services and the Connect infrastructure. Cocomo promises to make it easy to add chat, live file sharing, screen sharing, white-boarding and VOIP audio to applications. Given the high quality of Adobe's Connect product, Cocomo is likely to be a winner.
Mobile
While CTO Kevin Lynch demoed a number of phones running Flash, the big one was missing - the iPhone. Kevin held up an iPhone and said Flash on it was still baking in the oven and that Apple's chief taster hadn't approved it yet! It looks like the obstacle to getting Flash on the iPhone is not technical but business-related. Apple is concerned that allowing Flash on the iPhone will result in an Adobe monopoly on the default channel for content and applications with sizzle. Apple, however, is doing a great disservice to developers, because there is a phenomenal advantage to coding with one platform for all screens, even though you are likely to build the same application differently (scaled down) for a mobile device. I'm still in the process of picking up Objective-C, Xcode, etc., which I need to build an iPhone app. I wish I could write the same ActionScript for the iPhone that I write for the Browser, AIR and PDF.
Miscellaneous
Some random thoughts from the conference:
Flash Player 10
Flash 10's new text engine ("Vellum") provides advanced text rendering capabilities. Layouts such as text wrapping around images, which is very easy to do in HTML, is now supported in Flash 10. Bi-directional (R-to-L) languages are natively supported. Text rotation does not require embedded fonts in 10. New York times demo-ed a new AIR-based news reader application that takes advantage of Flash 10's new text capabilities. CS4 and Flex Gumbo components use the new text engine which is written entirely in ActionScript.
The ability to allow search engines to index/search content inside a SWF file was unveiled. While it requires some extra work, the fact that this now possible is significant.
The new text and search capabilities address what used to be one of the major drawbacks of Flash for text-oriented content. One wonders if Flash 10 will usher in a new era where we will see more traditional consumer-facing Web sites adopting Flash.
One intriguing announcement was the "Alchemy" project which allows your C++ code to run in the Flash player. This project provides libraries that can be used to generate ActionScript code from C++ which can then run in the Flash Player. A demo of OpenSSL code ported to run in Flash was very interesting. Encryption may be a good example of established, existing C++ code that one may want to run in the Flash Player.
Another cool demo was the new feature that enables direct Player to Player communication using the new RTMFP (Real Time Media Flow Protocol) protocol. This is peer-to-peer communication but does not support file or document sharing. It requires a server for the initial connection hookup between peers.
Flash Catalyst (formerly known as Thermo)
This is a new design tool that allows designers to import creative graphics content from Photoshop or Fireworks and turn them into real application components. The designers can control the look and feel of the application including state transition animations. The output of Catalyst is a "FXP" package that can then be imported into Flex by a developer and used directly.
Catalyst leverages the new XML-based FXG graphics language. This language lets you declaratively define look and feel of components - effectively this is doing programmatic skinning without writing code. Hopefully this will reduce the need for graphical skinning and reduce the proliferation of image assets for skinning and leverage all the advantages of programmatic skinning.
Couple of concerns I have about Catalyst are 1) Will this end up in "XML hell" with projects containing hundreds of FXG XML files, and 2) Will round-tripping between Catalyst and Flex (designer and developer) really work well for large commercial-grade projects?
Flex Gumbo (Flex 4)
This is the next generation of Flex which was previewed and is due to be released next year. The main feature seems to be the new MVC separation at the component level. This ties in with Catalyst - the separation allows for designers to have more control over the look and feel of components using the Catalyst tool. Other than this separation for Catalyst, there does not seem to be anything of major significance. There is some new ColdFusion integration but that is of no use to those of us who don't live in the proprietary world of ColdFusion.
A disappointment for me was that there is no change to facilitate runtime changing of CSS. CSS files are still compiled into the SWF. The same applies to FXG files. If you need to be able to change the CSS/FXG files after deployment of your application at the customer site, it requires custom code to be written to allow this.
AIR
There seems to be more buzz around AIR with several well attended sessions around this technology. The application that won the top award in the Enterprise category was a trading desktop AIR application from NASDAQ.
Adobe announced the new "Wave" AIR application for desktop notification. Using this application, users can get desktop notifications from (say) their favorite social networking sites without having to browse to each site to look for anything new. There is a notification service in the back-end that Adobe hosts and that content producers integrate with.
Sneak preview of "Nitro" widgets provided glimpses of a formalized framework for building widgets that can run on multiple screens. Drag and drop of widgets from the Browser to the desktop reminded me of the JavaFX demos of similar functionality. The "Durango" preview showcased AIR mashups - dragging and dropping application components to weave together a composite AIR app.
AIR, with its platform independence, local database, WebKit embedded Browser, JavaScript and ActionScript scripting capabilities, and of course the Flash Player runtime, seems poised to be a powerful channel for widgets/gadgets. It is possible that AIR on the desktop could be as ubiquitous as Flash in the Browser. Interestingly, downloading the free Adobe Reader 9 also installs AIR by default showing Adobe's desire to try and get AIR on as many desktops as Reader. Smart strategy!
Acrobat sessions introduced new features of version 9 released recently. The most interesting feature to me was the new embedded Flash player in Reader/Acrobat and better support for a SWF file running inside PDF. It is possible to script JavaScript to interact with the SWF within the PDF. Adobe is moving Acrobat from being a viewer for static documents to a full-blown application platform. Given the popularity of Reader on desktops, this is very promising. However, the ability to debug/profile code in Acrobat needs to be improved if it needs to really be an application platform.
One wonders if Adobe will start blurring the lines between its desktop runtimes - AIR, Flash Desktop Player and Reader.
Business Intelligence
SAP Business Objects had a demo booth where they were showcasing their XCelsius product line. A new demo that I haven't seen before was the ability to export a dashboard to PDF that contains a SWF providing rich interactive capabilities. See demo here on their Web site. Another new demo was their use of AIR for desktop widgets/gadgets delivering BI content. It ships a component SDK that can be used to put together custom dashboards - this is oriented towards IT and consultants rather than the end user. XCelsius uses a home-grown server-side SWF generation/manipulation library.
XCelsius is clearly ahead of the competition in terms of taking advantage of Adobe technologies such as Flash, Flex, AIR and PDF in its BI suite. BI aside, SAP in general seems to have a solid partnership with Adobe. Adobe's PDF generation libraries are baked into all of SAP's OLTP products. They use Adobe Connect Web Conferencing product for their training solutions.
ILOG (soon to be part of IBM) demoed their Flash/Flex-based visualization library Elixir. ILOG has a history of experience in the advanced visualization space, although their traditional forte is optimization and business rules. The library is quite rich, including Charts, Maps, Org Charts, Gantt Charts, Treemaps and Gauges (they don't have a Pivot table). Their library is re-sold by Adobe for $799 per license, but the catch is the hefty deployment fees/royalties, which reportedly could cost tens of thousands of dollars a year.
Cocomo
This is a new Platform as a Service offering for building real-time collaboration capabilities leveraging Acrobat.com cloud services and the Connect infrastructure. Cocomo promises to make it easy to add chat, live file sharing, screen sharing, white-boarding and VOIP audio to applications. Given the high quality of Adobe's Connect product, Cocomo is likely to be a winner.
Mobile
While CTO Kevin Lynch demoed a number of phones running Flash, the big one was missing - the iPhone. Kevin held up an iPhone and said Flash on it was still baking in the oven and that Apple's chief taster hadn't approved it yet! It looks like the obstacle to getting Flash on the iPhone is not technical but business-related. Apple is concerned that allowing Flash on the iPhone will result in an Adobe monopoly on the default channel for content and applications with sizzle. Apple, however, is doing a great disservice to developers: there is a phenomenal advantage to coding with one platform for all screens, even though you are likely to build the same application differently (scaled down) for a mobile device. I'm still in the process of picking up Objective C, XCode and the rest of what I need to build an iPhone app. I wish I could write the same ActionScript for the iPhone that I write for the Browser, AIR and PDF.
Miscellaneous
Some random thoughts from the conference:
- One way to feel the pulse of the market and understand trends is to look at which sessions at a conference are the most popular. At MAX, the most popular and sold-out sessions included (in no particular order) CS4, Search-able SWF, Flex Introduction and ColdFusion. The majority of attendees at the conference seemed to be folks producing creative content using CS4 products and/or folks building Web sites using ColdFusion. CS4 remains Adobe's flagship product suite and is hugely popular. I was quite surprised by the fact that ColdFusion, which I had assumed to be legacy stuff, is alive and well. Interestingly, ColdFusion seems to enjoy a cult following among its users, who don't seem to care that it is a proprietary platform. The popularity of the Search-able SWF session shows that consumer-facing Web sites using Flash are very keen on a solution that lets search engines get to content within their SWFs; this may be the single biggest concern of Flash-based Web sites for whom search hits are important. The fact that Flex introductory sessions were so popular shows that while Flex is gaining a lot of popularity, it is still an early-stage technology that more people are beginning to discover.
- Google's session on Flash support in Google Maps was interesting not just because users can now use maps natively in their Flash/Flex applications, but for what Google said were the advantages of Flash over the JavaScript-based Maps solution: 1) the performance advantage of Flash in manipulating 5-10K vertex polygons in real-time and 3D; 2) the vector graphics drawing APIs of Flash, which enable specialized markers such as a rectangle with glow filters or video markers overlaid on Maps (a small drawing sketch follows this list).
- Scrapblog is a company that won an award in the RIA category. Scrapblog is a tool for creating online multimedia scrapbooks and is an example of what still remains the primary use of Flash - creative design with graphical and audio/video content.
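To illustrate Google's second point about vector drawing, here is a minimal ActionScript sketch of a glowing rectangular marker of the kind they described (colors and sizes are arbitrary):
import flash.display.Sprite;
import flash.filters.GlowFilter;
// Draw a rectangular marker with a glow filter - the kind of custom
// overlay that the Flash drawing API makes easy compared to JavaScript
var marker:Sprite = new Sprite();
marker.graphics.beginFill(0x3366FF);
marker.graphics.drawRect(-8, -8, 16, 16);
marker.graphics.endFill();
marker.filters = [new GlowFilter(0xFFCC00, 0.9, 6, 6)];
addChild(marker); // position it over the map as needed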
Sunday, September 28, 2008
JavaFX - Applets Part II
SUN entered the RIA fray with its announcement at JavaOne of the JavaFX Rich Client Technology - "a family of products for creating RIAs with immersive media and content across all screens of your life". JavaFX technology includes the following:
- A runtime plugin similar to Adobe's Flash Player
- Development SDK including IDE plugins, compiler and debugger
- A new declarative scripting language for building UIs called JavaFX Script
The JavaFX technology for RIAs is conceptually similar to Adobe's Flex and Microsoft's Silverlight in that it has the following key ingredients:
- A formalized client-side component and event model which is absolutely essential for any RIA with heavy client-side business logic, and which is sorely lacking in the DHTML/AJAX/JavaScript world
- A JIT-compiled Virtual Machine runtime that is far more efficient than interpreted JavaScript in the Browser
- A development SDK with editors, designers, debuggers and profilers
- Built-in support for hardware acceleration and superior vector graphics with a library that supports animations and effects out-of-the-box
Improvements over the Java Applet experience of old include:
- The size of the plugin core needed for typical applications is only a few MB
- Graphics capabilities look much more refined, competing with Flash and Silverlight UIs
- Installation, configuration and launch may be easier with the deployment toolkit and JNLP support
- Runtime performance has been improved
One interesting aspect of this technology is the new language - JavaFX Script. JavaFX Script is a declarative, statically typed language for defining GUIs and application behavior. It provides data binding capabilities for synchronizing UI element state with application data. It is possible to call Java code from JavaFX Script, making it easy to leverage existing Java APIs such as the Swing toolkit. It is unclear if we will have something analogous to the Flex-AJAX bridge for JavaFX Script-to-JavaScript communication. With JavaFX Script you can build truly multi-threaded applications, but I am reminded of Swing developers often shooting themselves in the foot with UI updates and the event-dispatch thread. JavaFX Script is compiled to JVM bytecode, providing superior execution performance. Some other academically interesting features of the language are described later in this post.
JavaFX Script can be compared to Adobe's MXML/ActionScript and Microsoft's XAML/C#. However, unlike MXML and XAML, JavaFX does not have an XML description of the application UI tree; the UI tree description and the business logic behavior are combined into JavaFX scripts. As a developer looking at Flex and Silverlight to produce commercial software, the first reaction to JavaFX Script is - oh no, another language! Why not just do what Adobe did with ActionScript: adopt the ECMA Script standard and enhance it where needed? While the ability to work with Java from within JavaFX Script is appealing to Java developers, ActionScript's object-orientedness and semantics are also quite natural for a Java developer. I wonder if SUN could have served developers better by sticking with the ECMA Script standard, which is the basis for ActionScript and JavaScript, and extending it to support Java invocation and any other features as needed.
Some other features of the JavaFX technology include:
- Ability to run an application both in the Browser and on the desktop. In fact, you could drag an application out of the Browser, drop it onto your desktop, and continue working with it as a full desktop application. JavaFX applications run in a separate process, so conceivably a JavaFX application crash will not crash the Browser - along the same lines as tabs in Google's Chrome Browser running in separate process spaces.
- JavaFX applications communicate with the server back-end either through Web Services or through RMI. Web Services are extremely useful, but for heavy-duty applications, more efficient mechanisms such as the AMF3-over-HTTP support in Flash/Flex are needed (see this). Perhaps someone could implement AMF3-over-HTTP support for JavaFX as an open-source project; a sketch of the Flex mechanism follows this list. RMI has issues with firewalls, requiring complex HTTP-tunneling solutions.
- Mobile and TV platform support coming in the future
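For comparison, here is roughly what the Flex AMF3-over-HTTP mechanism mentioned above looks like from ActionScript; the destination name, endpoint URL and operation are hypothetical:
import mx.rpc.events.ResultEvent;
import mx.rpc.remoting.RemoteObject;
// "quoteService" and getQuotes() are made-up names; the destination
// would be defined in the server's services-config.xml
private function fetchQuotes():void
{
var ro:RemoteObject = new RemoteObject("quoteService");
ro.endpoint = "http://example.com/messagebroker/amf";
ro.getQuotes.addEventListener(ResultEvent.RESULT, onQuotes);
ro.getQuotes("NASDAQ"); // payload travels as binary AMF3, not XML
}
private function onQuotes(event:ResultEvent):void
{
trace("received: " + event.result);
}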
The academically interesting JavaFX Script language features alluded to earlier include:
- Classes have attributes (analogous to fields in Java), functions and operations (analogous to methods in Java)
- The distinction between functions and operations is interesting; one wonders if it adds complexity and could have been combined into one construct. Functions are for implementing simple business logic, while operations are for complex logic with exception handling. More interestingly, return values of functions are always re-evaluated when the value of any variable referenced by the function changes, which is useful with the data binding described below. On a related note, functions are independent entities that don't need to be associated with a class - more of a procedural concept than an object-oriented one. While the separation of state and behavior is valid on the server for SOA (see this), applying the concept on the client feels weird, although it may make sense from a data binding perspective.
- Function and operation definitions live outside of the class declaration; the class declaration contains only the declaration of the function/operation. Similarly, attribute initialization is defined outside of the class declaration. This feels more C++-ish than Java-ish.
- Data binding is a useful feature for UI development. An attribute can be defined to be bound to another attribute or to a function so that its value is dynamically (and optionally lazily) updated whenever the bound target value changes (a Flex analog of binding and triggers is sketched after this list).
- Blocks of code within a "do" or "do later" block execute in a separate thread. While Java developers are familiar with the convenient multi-threading mechanisms available in Java, this is the first time a scripting language has provided multi-threading capabilities. While this is a great advantage, it also opens the door to pitfalls for average developers, as multi-threading is inherently hard to design and implement correctly for complex applications.
- Triggers are an exciting feature where you can write code that kicks in on state-change events such as an attribute value change - not unlike the database triggers developers are familiar with.
- Attributes can have cardinality operators (think regex): ? for optional, + for one or more, and * for zero or more.
- An attribute can have an inverse declaration indicating a bi-directional relationship with another attribute, with automatic update of the inverse on attribute change. This feels like something one does in ORM (Object Relational Mapping) with persistence to a datastore; it seems out of place in a UI-oriented scripting language.
- Arrays with "list comprehensions" seem extremely useful. This is like writing SQL statements on lists e.g.
var a:Integer* = select n*n from n in [1..10] where (n%2 == 0);
New keywords such as insert as first, into, before and after facilitate useful list manipulations.
- Object literals allow for creating object instances in a JSON-like fashion. While this seems useful, it is likely to lead to very verbose and difficult-to-read code - see this TableNodeExampleApplet.fx example from a JavaFXpert.
- Multiple inheritance, which Java avoided due to its inherent complexity, is supported by JavaFX Script.
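For Flex developers, the data binding and trigger features above map loosely onto the mx.binding.utils APIs. A rough ActionScript analog, with made-up property names:
import flash.events.Event;
import mx.binding.utils.BindingUtils;
import mx.binding.utils.ChangeWatcher;
[Bindable]
public var quantity:int = 0;
public var total:Number = 0;
private function setupBindings():void
{
// Analog of JavaFX "bind": total is recomputed whenever quantity changes
BindingUtils.bindSetter(
function(q:int):void { total = q * 9.99; }, this, "quantity");
// Analog of a JavaFX trigger: arbitrary code kicks in on state change
ChangeWatcher.watch(this, "quantity",
function(event:Event):void { trace("quantity changed"); });
}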
Overall, the JavaFX technology is very promising and will help usher in the new Web 2.5 era faster. If SUN plays its cards right, JavaFX may catch up with Silverlight and maybe even with Flash/Flex someday.
Monday, August 25, 2008
Flex TabNavigator Image Snapshot Icons
Example files below illustrate generating thumbnail snapshots of tabs in a Flex TabNavigator. The thumbnails are shown as tab icons.
See this flexcoders thread.
ImgSnapshot.mxml - Main Application File:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" creationComplete="init()">
<mx:Script>
<![CDATA[
import mx.events.IndexChangedEvent;
import mx.containers.TabNavigator;
private var tabNav:TabNavigator = new TabNavigator();
private function init():void
{
// tab1
var tabChild1:MyComponent = new MyComponent(0);
this.tabNav.addChild(tabChild1);
// tab2
var tabChild2:MyComponent = new MyComponent(1);
this.tabNav.addChild(tabChild2);
this.tabNav.percentHeight = 100;
this.tabNav.percentWidth = 100;
this.addChild(this.tabNav);
}
]]>
</mx:Script>
</mx:Application>
MyComponent Class - MyComponent.as:
package
{
import flash.events.Event;
import mx.charts.PieChart;
import mx.charts.series.PieSeries;
import mx.collections.ArrayCollection;
import mx.containers.HBox;
import mx.containers.TabNavigator;
import mx.core.Container;
import mx.events.FlexEvent;
public class MyComponent extends Container
{
private var box:HBox = new HBox();
private var chart:PieChart = new PieChart();
private var index:int;
public function MyComponent(indx:int)
{
super();
this.index = indx;
this.label = "Tab " + this.index;
this.percentHeight = 100;
this.percentWidth = 100;
this.addEventListener(FlexEvent.UPDATE_COMPLETE, updateComplete);
}
override protected function createChildren():void
{
super.createChildren();
this.box.addChild(this.getPie());
this.addChild(this.box);
}
override protected function updateDisplayList(
unscaledWidth:Number, unscaledHeight:Number):void
{
super.updateDisplayList(unscaledWidth, unscaledHeight);
this.box.setActualSize(unscaledWidth, unscaledHeight);
this.chart.setActualSize(unscaledWidth, unscaledHeight);
}
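// UPDATE_COMPLETE handler: a hidden tab starts out at zero size, so first
// borrow the selected tab's size to force a real layout pass; once sized,
// capture the snapshot (via IconUtil) and assign it as this tab's icon.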
private function updateComplete(event:Event):void
{
var tabNav:TabNavigator = TabNavigator(this.parent);
if (this.width == 0 && this.height == 0) {
var selTab:MyComponent = MyComponent(tabNav.getChildAt(tabNav.selectedIndex));
this.width = selTab.width;
this.height = selTab.height;
this.updateDisplayList(selTab.width, selTab.height);
return;
}
if (this.icon == null)
this.icon = IconUtil.getIconClass(this, tabNav.getTabAt(this.index), 150, 100);
}
private function getPie():PieChart
{
chart.percentWidth = 100;
chart.percentHeight = 100;
chart.showDataTips = true;
chart.dataProvider = this.getChartDataProvider();
var series:PieSeries = new PieSeries();
series.nameField = "label";
series.field = "data";
series.filters = [];
chart.series = [series];
return chart;
}
private function getChartDataProvider():ArrayCollection
{
var arr:Array = [];
arr.push({label:"East", data:24});
arr.push({label:"West", data:32});
arr.push({label:"North", data:22});
arr.push({label:"South", data:11});
arr.push({label:"Texas", data:5});
return new ArrayCollection(arr);
}
}
}
IconUtil Class - IconUtil.as:
package
{
import flash.display.BitmapData;
import flash.events.Event;
import flash.geom.Matrix;
import flash.utils.Dictionary;
import mx.core.BitmapAsset;
import mx.core.UIComponent;
import mx.graphics.ImageSnapshot;
public class IconUtil extends BitmapAsset
{
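// Weakly-keyed map from the target component (the tab button) to the
// snapshot bitmap captured for it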
private static var dictionary:Dictionary = new Dictionary(true);
public function IconUtil()
{
addEventListener(Event.ADDED, addedHandler, false, 0, true);
}
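// When Flex instantiates this class as a tab's icon and adds it to the
// display list, look up the bitmap captured for the parent tab and apply it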
private function addedHandler(event:Event):void
{
if (this.parent == null)
return;
var value:Object = dictionary[this.parent];
if (value == null)
return;
this.bitmapData = value.src;
UIComponent(this.parent).invalidateSize();
}
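// Captures a scaled snapshot of 'source', stores it against 'target' (the
// tab button) and returns this class, suitable for use as the tab's icon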
public static function getIconClass(source:UIComponent, target:UIComponent,
width:int, height:int):Class
{
if (source.width <= 0 || source.height <= 0)
return null;
var scaleWidth:Number = width/source.width;
var scaleHeight:Number = height/source.height;
var mtrx:Matrix = new Matrix(scaleWidth, 0, 0, scaleHeight);
var bitmap:BitmapData = ImageSnapshot.captureBitmapData(source, mtrx);
dictionary[target] = {src:bitmap, w:width, h:height};
return IconUtil;
}
}
}
Sunday, February 17, 2008
7 Habits of Highly Effective Software Developers
1. Design before Coding - We developers often tend to get straight to code, spending little time on design. While too much formal, complete design is useless when building software systems, some basic design before coding is extremely helpful. Design does not have to mean writing a text document with pictures or UML models; design can be skeletal code that is the basis for the full implementation (a tiny skeleton sketch follows this list). I try to first build the data structures that hold system state and the service interface contracts (not implementations) for the entire application as part of my design exercise. I then try to think through the application flow for various usage scenarios. Good design typically pays off in the end in terms of good quality code developed in a shorter period of time.
2. Re-factor continuously - This seems contradictory to the first habit of designing before coding; after all, if the design is good, why should there be a need for re-factoring? The answer is that it is impossible to design software systems accurately and fully at the very beginning. Designs tend to evolve as the system is being built, and oftentimes requirements keep changing. No one can get it right the first time unless you are building a very simple, trivial piece of software. Re-visiting previously written code and continuously re-factoring it is one of the keys to building good software.
3. Unit Test every quarter - No, that's not the company's fiscal or calendar quarter; it is the 1/4 milestone of your task. Break your task into four parts, and after each quarter, think about how you would unit test the code you have written. Note that I'm not suggesting you write a unit test at each quarter - rather, you should have a good mental idea of how one would go about effectively testing what's built so far. If what you have written so far is not unit-testable, re-factor the code to facilitate unit testing before moving on to the next quarter of your task. At the end of the complete task, make sure a formal unit test gets written.
4. Write squeaky-clean code - All brilliant developers that I've seen write very clean code. Clean code means paying attention to removing dead code, avoiding compiler warnings, indentation, spacing, naming, and consistency in code style. This is more than just cosmetics - if your code is not clean, it is unreadable, unmaintainable, less likely to be reused and more likely to get re-written by someone else.
5. Comment code - The only documentation we developers will ever write is comments in the code. Keeping this up to date is of critical importance for the long life of the code. Any code where the logic may not be obvious to a new peer programmer should be documented. Documenting your code is not just for others but for yourself too - how many times have you gone back to your own code written a year ago and wondered what the heck the implementation logic was?
6. Be lazy - Yes, be lazy! Don't write your own code when there is good code already available for the same task. Don't re-invent the wheel. There are a ton of extensively used open-source utility libraries; make use of these wherever possible. Why waste time writing code when you can borrow someone else's code, which may even be better than what you could write?
7. Get your code peer-reviewed - Peer reviews often help generate new ideas and perspectives on implementation that you may not have thought about. Set your ego aside and solicit peer reviews of your code.
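As an illustration of habit 1, a design skeleton might contain nothing but contracts and state; the service and types below are hypothetical:
// Hypothetical skeleton: contracts and state only, implementations come later.
// IOrderService.as - service interface contract
public interface IOrderService
{
function placeOrder(order:Order):String; // returns a confirmation id
function cancelOrder(confirmationId:String):Boolean;
}
// Order.as - a data structure holding system state
public class Order
{
public var customerId:String;
public var items:Array; // of order line items, fleshed out later
}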