AdoptOS

Assistance with Open Source adoption


Email Templates Support Sales Conversations

KnowledgeTree - Mon, 10/20/2014 - 14:34

Your team knows how effective sales enablement content is. It builds trust and consensus among buyers. KnowledgeTree pushes the most relevant sales enablement content to sales people so they share the most effective piece in any sales situation.

What happens next? Traditionally, when a sales person wants to share content, they'll send it via email. And that means crafting a note that positions the content for prospects. But that's yet another task that sales people shouldn't waste time on.

Standardize Email Messaging Across Sales Teams

KnowledgeTree helps sales people to message effectively. That means discovering the best messages to share with prospects and customers. The best messages are those that have been proven to win in different sales situations, not just ones that are generically effective. After all, one message tuned for CIOs at banks may get a lot of use. But is it right for a VP of Sales at a manufacturer? You need to push relevant content to sales teams based on what works in their individual sales scenarios.

One of the most common ways that sales teams message to prospects is via email. So sales enablement requires that sales use the most effective email templates for a given situation. After all, even a great case study video can miss its mark if it is shared with a prospect using an off-message email. So, help sales teams to message effectively by linking email templates with the content they support. With KnowledgeTree's OnMessage technology, sales enablement teams can create and share effective email templates with their sales teams. Instead of depending on sales people to choose which content to use and then guess at how to position it, sales enablement can provide email that puts your collateral in the best light.

How Email Templates Get Applied to Content

KnowledgeTree focuses on content discovery. And for an email template that supports a piece of content, you want that template linked to the content itself, so there's no extra discovery step. In the KnowledgeTree Manager tool, sales enablement or marketing professionals can quickly add email templates. Let's go to the Settings section. Here we can identify which content we want to add a template to. Again, because email templates are most effective when connected with content, we add the template and link it to the content itself. Here we add a subject and the body of the email. We can add "mail merge" type fields to automate the personalization of the email. Now we've set up a great, standardized email template that is automatically associated with this content piece.

Now, let's switch to Salesforce.com and the perspective of a sales person. From here, KnowledgeTree recommends individual pieces of content that match my opportunity. Then, when I decide to share one with my prospect, I can choose to email it. That will open my default email client and populate the targeted, approved email content into the email. Sales people can adjust that content, but they've just saved significant time and effort: they no longer have to think about how to position the content via email. KnowledgeTree also gives marketing insight into the effectiveness of their content and email templates. That lets marketers tune email copy for sales people to boost prospect interest.

Instead, KnowledgeTree’s OnMessage technology connects email templates directly with content. That allows sales enablement teams to position each video, eBook, or other content with the best email message.

Now sales people don’t need to hunt for an email template to share. They don’t need to write a non-standard email to send to a prospect. Instead, proven and approved email templates are automatically offered to sales people as they share content. Plus, KnowledgeTree automatically embeds the trackable link to your content in the email. So, reps not only save time, but they also learn when prospects engage with their content.

Sales enablement also gets a boon. They can push best practice email templates to their sales teams. And they can measure the effectiveness of each email. So testing the effectiveness of email is easy. And they can more effectively drive prospects to their best sales enablement content.

The post Email Templates Support Sales Conversations appeared first on KnowledgeTree.

Categories: ECM

DIY: Liferay Events Hacks: Part 2

Liferay - Mon, 10/20/2014 - 11:46
A community challenge for you

Liferay's worldwide conferences generate quite a bit of data, and I am challenging the community: Take this data, and do something more interesting than a boring list of speakers and rooms. Get creative with the data (it's super-easy to digest, see the example code from my first post). Have some fun and show us how creative you can get!

What's In It For Me?

You'll win one of these:

  • Gratitude from our community and recognition from your peers that you are indeed a rockstar hacker (and a small gift from Liferay), or
  • A Tesla¹

Not sure which one will be given away yet. We're still working out the details.

The Details

Liferay holds many events throughout the year, and there is a lot of data associated with them. Hundreds of speakers, sessions, and activities across global venues means a lot of data, and in a previous blog post I challenged you to take our open data stream and do something interesting with it. In that post, I documented the data related to sessions, speakers, rooms, maps, activities, sponsors, etc, and gave some example JavaScript you can copy/paste into your browser's developer console to see just how easy it is. And it's all available to you and your creative minds!

Now it's time to look at some even more interesting data: iBeacons!

If you've attended some of our recent Liferay conferences (or you're planning on attending future events), you've probably heard of iBeacons. We've been using them in several events to showcase Liferay as a mobile engagement platform and to provide value to attendees by engaging them with location and time-sensitive notifications (e.g. when walking out of a breakout session, you'll receive helpful followup information about related sessions, and a plea to provide feedback).

The way it works is pretty simple: the Liferay Events mobile app knows about these little Bluetooth transmitters we hide throughout venues (if you look around, you might spot them!). When you walk into or out of range of each beacon, or linger in a given area, the app knows what you're doing and will provide interactive notifications to you based on your movement.

But there's more -- the app also periodically records (anonymous) data regarding how many devices are within range of each beacon. Although this makes Olaf's tinfoil hat buzz with doubt and uncertainty, you (and Olaf) can rest assured we do not record anything private or identifying - it's totally anonymous.

And the best part -- the data is open for you to browse, process, and have fun with. And therein lies this challenge: channel your inner analytic/visualization geek, hook up to the data, and show everyone something interesting! It doesn't have to be enterprise-grade, bulletproof, fully cooked, or ready for deployment into production. But if it's interesting and fun, I'll do my best to show off your creation in our community.

Don't forget, the agenda/speakers/sessions/rooms data is already documented. What follows is a description of the iBeacon data.

The iBeacon Data

iBeacon data can be retrieved from a JSON endpoint by specifying the event for which you want data (and optionally a time-of-day filter, to reduce how much data you get or to do realtime monitoring). You can retrieve data for a past event or a current event (e.g. for a realtime dashboard). The event specifiers for 2014 that might have data:

  • lpsf-benelux-2014 (Benelux Solutions Forum)
  • lrfs2014 (France Symposium)
  • lr-nas-2014 (North America Symposium)
  • spain2014 (Spain Symposium)
  • lpsf-uk-2014 (UK Solutions Forum)
  • lpsf-de-2014 (Germany Solutions Forum)
  • devcon-2014 (Liferay Developer Conference)
  • brazil2014 (Brazil Symposium)
  • italy2014 (Italy Symposium)
Example URLs

1. Get all the iBeacon data for the France Symposium:

http://mdata.liferay.com/html/mdata-public/liferay-beacons-service-get.jsp?event=lrfs2014

2. Get all the iBeacon data for the France Symposium, but only starting at 1402583699000 (which is Thu, 12 Jun 2014 14:34:59 GMT)

http://mdata.liferay.com/html/mdata-public/liferay-beacons-service-get.jsp?event=lrfs2014&from=1402583699000

3. Get all the iBeacon data from the France Symposium between 1402583699000 and 1402583799000 (i.e. from Thu, 12 Jun 2014 14:34:59 GMT through Thu, 12 Jun 2014 14:36:39 GMT, about 100 seconds' worth):

http://mdata.liferay.com/html/mdata-public/liferay-beacons-service-get.jsp?event=lrfs2014&from=1402583699000&to=1402583799000

The first example should give you 3239 results, the second about 600, and the third about 6 results. Note that some events do not yet have any data, because the event has not yet taken place. But you can use prior events for testing purposes!
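The from and to parameters are plain epoch timestamps in milliseconds. If you want to compute your own time windows, here is a quick Java sketch (class name is my own) that converts between epoch milliseconds and human-readable GMT dates:

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class EpochMillis {

    public static void main(String[] args) throws Exception {
        SimpleDateFormat gmt =
            new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss 'GMT'", Locale.US);
        gmt.setTimeZone(TimeZone.getTimeZone("GMT"));

        // Epoch millis -> human-readable GMT date.
        System.out.println(gmt.format(new Date(1402583699000L)));
        // Prints: Thu, 12 Jun 2014 14:34:59 GMT

        // Human-readable GMT date -> epoch millis for the from/to parameters.
        System.out.println(gmt.parse("Thu, 12 Jun 2014 14:34:59 GMT").getTime());
        // Prints: 1402583699000
    }
}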

The result object is always a JSON object that has a status code (stat) to indicate success or not. The code is either ok (meaning success), or something else (indicating failure). So check the stat code before doing anything else. E.g. here's an error:

{ "stat": "error: something is horribly wrong" }

And here's what success looks like:

{ "stat" : "ok", "size" : <size of result set>, "from": <earliest timestamp of result set, or specific "from" time if you gave one>, "to": <last timestamp, or specific "to" time if you gave one>, "resultSet": <JSON ARRAY OF RESULTS> }

The resultSet is itself a JSON array of results. It looks like:

[ { "id": "8f6ac3d0f22afa59", "date": 1402583703156, "beacons": [ { "proximity": "far", "beacon_name": "Mystery Object 4" }, { "proximity": "immediate", "beacon_name": "Mystery Object 1" }, { "proximity": "far", "beacon_name": "Mystery Object 3" }, { "proximity": "near", "beacon_name": "Mystery Object 2" } ], "regions": ["Venue", "Salon Bonaparte"] }, <more results>,... ]

The entries in each array element of the resultSet:

  • id: A unique id (corresponding to a unique install of the app; if you reinstall the app you get a new id)
  • date: The timestamp of the ping
  • regions: a JSON Array of regions that the device was "in" at the time of the ping
  • beacons: a JSON Array of individual beacons that the device could "see" at the time of the ping. A proximity to each beacon is also included (immediate, near, or far)
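Putting the endpoint and the result format together, here is a minimal Java sketch that fetches one event's data, checks the stat code, and tallies pings per region. It assumes the org.json parsing library is on the classpath; the class and variable names are my own:

import java.io.InputStream;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

import org.json.JSONArray;
import org.json.JSONObject;

public class BeaconRegionCounter {

    public static void main(String[] args) throws Exception {
        String endpoint = "http://mdata.liferay.com/html/mdata-public/"
            + "liferay-beacons-service-get.jsp?event=lrfs2014";

        try (InputStream in = new URL(endpoint).openStream();
                Scanner scanner = new Scanner(in, "UTF-8").useDelimiter("\\A")) {

            JSONObject response = new JSONObject(scanner.next());

            // Always check the stat code before touching the result set.
            if (!"ok".equals(response.getString("stat"))) {
                System.err.println("Request failed: " + response.getString("stat"));
                return;
            }

            // Count how many pings were recorded in each region.
            Map<String, Integer> pingsPerRegion = new HashMap<String, Integer>();
            JSONArray resultSet = response.getJSONArray("resultSet");

            for (int i = 0; i < resultSet.length(); i++) {
                JSONArray regions = resultSet.getJSONObject(i).getJSONArray("regions");

                for (int j = 0; j < regions.length(); j++) {
                    String region = regions.getString(j);
                    Integer count = pingsPerRegion.get(region);
                    pingsPerRegion.put(region, (count == null) ? 1 : count + 1);
                }
            }

            System.out.println("Total pings: " + response.getInt("size"));

            for (Map.Entry<String, Integer> entry : pingsPerRegion.entrySet()) {
                System.out.println(entry.getKey() + ": " + entry.getValue());
            }
        }
    }
}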

To understand what a beacon region vs. individual beacon is, read this blog post!

So there you have it - what are you going to do with it?

----

¹ Tesla joke shamelessly stolen from Henry Nakamura

 

James Falkner 2014-10-20T16:46:30Z
Categories: CMS, ECM

Leveraging OSGi to Create Extensible Applications for Liferay 6.2

Liferay - Mon, 10/20/2014 - 04:38
It was great to participate in the Liferay North American Symposium this year. With hundreds of Liferay users (customers, partners, community members...) and dozens of presentations, it was not only a huge success but also a great opportunity to share user experiences and get your feedback. The North American Symposium is over, but Liferay World Tour 2014 is not! There are still many important events in our calendar, so you still have the chance to learn about Liferay's latest features firsthand.

Julio Camarero and I will be talking about Extensible Liferay Applications at the Spanish Symposium next week and at the Developer Conference in early November. This is probably one of the most relevant features in Liferay 6.2, because it's meant to completely change how Liferay applications are developed. Let's find out how with a simple example.

A Shipping Cost Calculator

Suppose you have an online shop and you need an application to calculate the final cost of purchasing an item, including its shipping to a destination, considering not only the distance but also the currency, the local taxes, and any other particularities. Thus, the final cost would be:

Final cost = [no. of items x item price] + [shipping cost to selected destination]

As a developer you could implement a very complex application that contains all possible shipping destinations. Every time you wanted to add or modify a shipping destination, you'd have to release a new version of your application. And your application would likely become more and more complex with every new release.

Alternatively, you could implement just the core functions of your calculator and define the shipping destinations as extensions to your application. This way, if you needed to add or modify a shipping destination, those changes would not affect the core functions, only a specific extension. With this approach, the release frequency of your core application, as well as its complexity, would decrease. Instead, new features would be added through small extensions with their own release frequency.

Modular and Extensible Applications: the OSGi Way

Probably at this point you've already realized the benefits of the second approach:
  • Simpler maintenance of the core application by reducing its complexity
  • Better performance (only required extensions would be installed)
  • Support for third party extensions
  • New market opportunities (e.g. purchasing shipping extensions)
This type of modular and extensible application is defined by the OSGi (Open Services Gateway initiative) specification. Thanks to Liferay's support for OSGi since version 6.2, you can now apply this pattern to your plugins.

We recommend going through the documentation about OSGi apps in Liferay. For now we'll show some quick guidelines for applying this pattern to the Shipping Cost Calculator project. You can also have a look at the complete source code of this project.

Required Services for an Extensible Shipping Cost Calculator

OSGi services consist of:
  • An interface, defining the service “contract”
  • One or more implementations of the interface
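For illustration, a service contract for our example could look like the following minimal sketch. The getShippingExtensionKey method appears in the registry code shown below; the other method names are assumptions for this post, not necessarily those of the actual sample project:

public interface ShippingExtension {

    // Key under which the extension registers itself (e.g. "usa").
    public String getShippingExtensionKey();

    // Label shown in the calculator's list of shipping destinations.
    public String getName();

    // Shipping cost for an order subtotal, including local particularities.
    public double getShippingCost(double subtotal);
}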
To make our shipping cost calculator extensible, we need two types of OSGi services.

Shipping Extensions

The ShippingExtension interface contains the methods that any shipping extension must implement. Implementations of this interface (e.g. ShippingExtensionUSA) are annotated with @Component, which allows OSGi to dynamically detect a new shipping extension when it's deployed.

@Component(immediate = true, service = ShippingExtension.class)
public class ShippingExtensionUSA implements ShippingExtension {

ShippingExtension Registry

In order to keep an up-to-date list of all the available shipping options, we need to track when these extensions (annotated with @Component) are deployed or undeployed. Through the @Reference annotation, the registerShippingExtension method of ShippingExtensionRegistryImpl is bound to the ShippingExtension service, so it will be invoked every time an implementation of ShippingExtension is deployed. The unregisterShippingExtension method is called when an implementation is undeployed.

@Reference(
    unbind = "unregisterShippingExtension",
    cardinality = ReferenceCardinality.MULTIPLE,
    policy = ReferencePolicy.DYNAMIC)
public void registerShippingExtension(ShippingExtension shippingExtension) {
    _shippingExtensions.put(
        shippingExtension.getShippingExtensionKey(), shippingExtension);
}

Accessing OSGi services from a non-OSGi context: the ServiceTrackerUtil

We're almost done. All we need to do is list the shipping extensions registered by the ShippingExtensionRegistry in our GUI and process the resulting form according to the selected option. Since our GUI is still a Liferay portlet, which is not handled by the OSGi service container (yet), we cannot use the @Reference annotation to obtain the ShippingExtension service. Liferay provides a util class for this purpose: the ServiceTrackerUtil.

_shippingExtensionRegistry = ServiceTrackerUtil.getService(
    ShippingExtensionRegistry.class, bundle.getBundleContext());

You can now test the app. First deploy all modules to your Liferay server, except for the shipping extensions. Then browse to any site page and add the Shipping portlet. Notice that the calculator is functional, but it displays no shipping options. Now deploy the shipping extensions one by one, refreshing the page each time. You'll see a list with the available shipping extensions. Selecting a shipping extension will modify the form and the final result.

Going even deeper in modularization

If you have worked with the sample code, you may have noticed that the core application is not contained in a single project, but in three:
  • shipping-api: Contains only the interfaces of the OSGi services that make up the app
  • shipping-impl: Contains the implementation of the core OSGi services of the app
  • shipping-web: Contains the user interface of the app
With this approach, the core of the application can be easily modified by changing the implementation or the web interface, without changing the public API.

Audience Targeting is the first official Liferay application built following this OSGi way, but this is actually how all Liferay apps will be built in the next version of Liferay Portal, so WELCOME TO THE FUTURE!!

Eduardo P. Garcia 2014-10-20T09:38:47Z
Categories: CMS, ECM

Alfresco Helps New Brunswick Public Safety Manage Critical Information

Alfresco - Mon, 10/20/2014 - 03:00

 

For the New Brunswick Department of Public Safety – a sprawling organization with 26 locations throughout the Canadian province and over 1,100 employees – efficient document and contract management was a real challenge.

The Department handles thousands of formal agreements and contracts that undergo numerous revisions and approval cycles. Simply monitoring review and expiration dates or locating signed copies of the right documents was a time consuming and manual process.

To make things even more complicated, each branch had different repositories, making it almost impossible to find information and keep up with when contracts were up for renewal.

“We needed a shared repository to allow us to better track when contracts were about to expire so that they didn’t lapse,” said Franz Weismann, assistant director of information and technology for New Brunswick’s Department of Public Safety.

After evaluating several ECM solutions, the Department chose Alfresco One based on its open source platform, its GISSP compliance, and its ability to isolate content domains and enable business owners to directly control access to information.

The Department also operates in Canada’s only officially bilingual province, so support in both French and English was a must, as was strong records management capabilities.

“In addition to all of its high value features, the real prize with Alfresco is its ability to deliver comprehensive records management,” said Weismann. “Once our users are leveraging the solution for collaboration and document management, it’s relatively simple for them to make the jump to declaring records using the familiar Alfresco Share interface.”

Today, 100% of the Department’s employees use Alfresco in some capacity to access information and more than 5,000 documents have been uploaded into the system.

Users can find information much more quickly, redundant information has been reduced, and collaboration has improved. Alfresco was even able to replace the Department’s manual microfilming process, eliminating the need to purchase two new microfilming cameras, at an estimated cost of $200K.

To learn more about how Alfresco helped New Brunswick better manage its enterprise content and improve collaboration, read the full case study here. 

Categories: ECM

Working with Liferay User Roles

Liferay - Sun, 10/19/2014 - 00:48
Liferay has different types of roles for users, so whenever we develop a portlet application we may need to fetch a user's roles.

The following article gives more background on Liferay roles: http://www.liferaysavvy.com/2014/03/liferay-building-blocks.html

Generally we have the following role types in Liferay:
  1. Regular Roles/Portal Roles
  2. Organization Roles
  3. Site Roles
  4. Inherited Roles
Portal Roles/Regular Roles

Liferay provides portal roles (also called regular roles) at the portal level. A regular role is not specific to anything like an organization, site, or user group, and it can be assigned to any user, whether that user belongs to an organization, community/site, or user group.

When we associate a regular role with a user, the association is stored in the Users_Roles mapping table.

To fetch regular roles we can use the RoleLocalServiceUtil class, which has many service methods that fetch roles for a given user. There are two ways to fetch a user's regular roles.

Using RoleLocalServiceUtil:

List<Role> userRoles = RoleLocalServiceUtil.getUserRoles(themeDisplay.getUserId());

Using the User object:

List<Role> userRoles1 = themeDisplay.getUser().getRoles();

Site Roles

A site role is a role type in Liferay that is associated only with site users. When we create a site role, we can assign it to any user who belongs to any site in Liferay.

When we associate a site role with a user, the association is stored in the UserGroupRole table. Whenever we want to get site roles, we have to use the corresponding service class, UserGroupRoleLocalServiceUtil, which has many service methods:

List<UserGroupRole> userGroupRoles =
    UserGroupRoleLocalServiceUtil.getUserGroupRoles(themeDisplay.getUserId());
List<UserGroupRole> siteRoles = new ArrayList<UserGroupRole>();

for (UserGroupRole userGroupRole : userGroupRoles) {
    int roleType = userGroupRole.getRole().getType();

    // Keep only the associations whose role is a site role.
    if (roleType == RoleConstants.TYPE_SITE) {
        siteRoles.add(userGroupRole);
    }
}

Organization Roles

Similar to site roles, organization roles are used for organization users. An organization role can be assigned to any user who belongs to any organization in the portal.

When we associate an organization role with a user, the association is also stored in the UserGroupRole table, and we again use UserGroupRoleLocalServiceUtil to fetch the roles:

List<UserGroupRole> userGroupRoles =
    UserGroupRoleLocalServiceUtil.getUserGroupRoles(themeDisplay.getUserId());
List<UserGroupRole> organizationRoles = new ArrayList<UserGroupRole>();

for (UserGroupRole userGroupRole : userGroupRoles) {
    int roleType = userGroupRole.getRole().getType();

    // Keep only the associations whose role is an organization role.
    if (roleType == RoleConstants.TYPE_ORGANIZATION) {
        organizationRoles.add(userGroupRole);
    }
}
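A related check that often comes in handy is testing whether a user has a given regular role by name. A minimal sketch, assuming the Liferay 6.x portal-service API (the role name here is just an example):

// Check whether the current user has a given regular role.
// The final argument controls whether inherited roles are considered.
boolean isPowerUser = RoleLocalServiceUtil.hasUserRole(
    themeDisplay.getUserId(), themeDisplay.getCompanyId(),
    "Power User", true);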
Inherited Roles

Inherited roles do not really exist in Liferay, but we can see them in the Roles section of a user's My Account page. These roles appear when the user is a member of a user group that has been assigned a role.

In other words, if a role is associated with a user group and the user is a member of that user group, the role shows up in the inherited roles section. The user is not directly associated with the role; the user group is, and users inherit the role through their membership in the user group.

Understanding inherited roles:
  • Create User Group (Photography) and assign Users to Photography User Group
  • Create role called Photography Group Member
  • Assign the User Group to the role (associate the role with the User Group)
Now all users who belong to the user group will get the Photography Group Member role as an inherited role. Note that the Photography Group Member role is not directly associated with the users; the role is associated with the Photography user group, which is why it appears as an inherited role.

When we want to fetch inherited roles, we first need to find all groups of the respective user so that we can fetch the roles assigned to each group:

<%
User selUser = themeDisplay.getUser();

List<Group> allGroups = new ArrayList<Group>();

List<UserGroup> userGroups = selUser.getUserGroups();
List<Group> groups = selUser.getGroups();
List<Organization> organizations = selUser.getOrganizations();

// Collect the user's sites plus the groups backing organizations and user groups.
allGroups.addAll(groups);
allGroups.addAll(GroupLocalServiceUtil.getOrganizationsGroups(organizations));
allGroups.addAll(GroupLocalServiceUtil.getOrganizationsRelatedGroups(organizations));
allGroups.addAll(GroupLocalServiceUtil.getUserGroupsGroups(userGroups));
allGroups.addAll(GroupLocalServiceUtil.getUserGroupsRelatedGroups(userGroups));

for (int i = 0; i < allGroups.size(); i++) {
    Group group = allGroups.get(i);

    // Roles assigned to the group are inherited by its members.
    List<Role> groupRoles = RoleLocalServiceUtil.getGroupRoles(group.getGroupId());

    if (!groupRoles.isEmpty()) {
        out.println(ListUtil.toString(groupRoles, Role.NAME_ACCESSOR));
    }
}
%>

Note: when you work with the code above, the respective Java classes must be imported.

The following is a complete sample JSP page:

<%@page import="com.liferay.portal.kernel.util.ListUtil"%>
<%@page import="com.liferay.portal.service.GroupLocalServiceUtil"%>
<%@page import="com.liferay.portal.model.Organization"%>
<%@page import="com.liferay.portal.model.User"%>
<%@page import="com.liferay.portal.model.UserGroup"%>
<%@page import="com.liferay.portal.model.Group"%>
<%@page import="java.util.ArrayList"%>
<%@page import="com.liferay.portal.model.RoleConstants"%>
<%@page import="com.liferay.portal.service.UserGroupRoleLocalServiceUtil"%>
<%@page import="com.liferay.portal.model.UserGroupRole"%>
<%@page import="com.liferay.portal.model.Role"%>
<%@page import="java.util.List"%>
<%@page import="com.liferay.portal.service.RoleLocalServiceUtil"%>
<%@page import="com.liferay.portal.service.UserLocalServiceUtil"%>

<%@ taglib uri="http://liferay.com/tld/portlet" prefix="liferay-portlet" %>
<%@ taglib uri="http://liferay.com/tld/theme" prefix="liferay-theme" %>
<%@ taglib uri="http://liferay.com/tld/ui" prefix="liferay-ui" %>
<%@ taglib uri="http://java.sun.com/portlet_2_0" prefix="portlet" %>

<portlet:defineObjects />
<liferay-theme:defineObjects />

<%-- Regular roles --%>
<%
List<Role> userRoles = RoleLocalServiceUtil.getUserRoles(themeDisplay.getUserId());
List<Role> userRoles1 = themeDisplay.getUser().getRoles();

for (Role role : userRoles) {
    out.println(role.getName());
}
%>

<%-- Organization roles --%>
<%
List<UserGroupRole> userGroupRoles =
    UserGroupRoleLocalServiceUtil.getUserGroupRoles(themeDisplay.getUserId());
List<UserGroupRole> organizationRoles = new ArrayList<UserGroupRole>();

for (UserGroupRole userGroupRole : userGroupRoles) {
    int roleType = userGroupRole.getRole().getType();

    if (roleType == RoleConstants.TYPE_ORGANIZATION) {
        organizationRoles.add(userGroupRole);
        out.println(userGroupRole.getRole().getName());
    }
}
%>

<%-- Site roles --%>
<%
List<UserGroupRole> userGroupRoles1 =
    UserGroupRoleLocalServiceUtil.getUserGroupRoles(themeDisplay.getUserId());
List<UserGroupRole> siteRoles = new ArrayList<UserGroupRole>();

for (UserGroupRole userGroupRole : userGroupRoles1) {
    int roleType = userGroupRole.getRole().getType();

    if (roleType == RoleConstants.TYPE_SITE) {
        siteRoles.add(userGroupRole);
        out.println(userGroupRole.getRole().getName());
    }
}
%>

<%-- Inherited roles --%>
<%
User selUser = themeDisplay.getUser();

List<Group> allGroups = new ArrayList<Group>();

List<UserGroup> userGroups = selUser.getUserGroups();
List<Group> groups = selUser.getGroups();
List<Organization> organizations = selUser.getOrganizations();

allGroups.addAll(groups);
allGroups.addAll(GroupLocalServiceUtil.getOrganizationsGroups(organizations));
allGroups.addAll(GroupLocalServiceUtil.getOrganizationsRelatedGroups(organizations));
allGroups.addAll(GroupLocalServiceUtil.getUserGroupsGroups(userGroups));
allGroups.addAll(GroupLocalServiceUtil.getUserGroupsRelatedGroups(userGroups));

for (int i = 0; i < allGroups.size(); i++) {
    Group group = allGroups.get(i);

    List<Role> groupRoles = RoleLocalServiceUtil.getGroupRoles(group.getGroupId());

    if (!groupRoles.isEmpty()) {
        out.println(ListUtil.toString(groupRoles, Role.NAME_ACCESSOR));
    }
}
%>

Author
Meera Prince
Liferay Top Contributor Award Winner 2014, 2013
http://www.liferaysavvy.com

Meera Prince 2014-10-19T05:48:20Z
Categories: CMS, ECM

Are Your Slide Presentations Putting Prospects to Sleep? Try This Instead

KnowledgeTree - Wed, 10/15/2014 - 16:09

You’ve spent days (maybe weeks) creating the perfect slide presentation for a big sales meeting with a huge prospect. It’s packed with data, designed with visual interest in mind, and chock-full of interesting information that you’re sure will convince the customer to move forward.

So, when you arrive at the meeting, open up the presentation, and dive into the first few slides, you’re shocked to find a disengaged audience that’s yawning, looking at watches, and frantically tapping away on their smartphones.

You’re toast.

Unfortunately, this is how a lot of sales presentations go. In fact, according to one report, nearly one-third of adults have admitted to snoozing during a slide presentation, and 24 percent suggest they’d rather do anything other than sit through another PowerPoint presentation. Yet, slide presentations are an absolute necessity in the B2B sales world. Done right, they can be an incredibly effective medium for illustrating data, communicating pain points and value propositions, and persuading people to adopt an idea or solution you believe in.

Breathing that kind of life into a slide presentation, however, requires more than simply throwing together a series of bullet points and charts. Instead, it requires a deep understanding of your audience, and contextual detail about their specific needs, pains, and buying stage. Loaded with that information, presentations become much more personal and relevant, and decision makers can’t help but pay attention.

Then again, creating those kinds of presentations likely requires an inordinate amount of time and energy that, frankly, your sales team doesn’t have, right?

Not exactly.

The beauty of sales enablement technology today is that it employs data science to predict which messages and content (i.e. presentation slides) will resonate most in specific sales situations. And it can do all of that without much heavy lifting on the salesperson’s part. In fact, creating the perfect presentation can be done in a few simple steps. From there, the platform matches the best individual slides from archived corporate decks to each prospect, which makes the process of creating a unique, personalized deck incredibly easy.

Help Sales Deliver the Perfect Presentation

The result? No more generic, standardized decks. Higher presentation engagement. And more, higher quality prospects in the later stages of the sales funnel.

Just as important, this process eliminates the need for sales reps to waste time hunting for the content, insight, data, or context needed to deliver highly impactful, personalized presentations. Instead, that information is directly pushed to them. And at the end of the day, that means reps can spend more time focusing on closing deals and less time worrying about whether a prospect is taking a snooze during their pitch.

The post Are Your Slide Presentations Putting Prospects to Sleep? Try This Instead appeared first on KnowledgeTree.

Categories: ECM

Automatically Generate Custom Presentations – The PerfectPitch

KnowledgeTree - Wed, 10/15/2014 - 11:45

Every interaction a sales person has with a prospect should be advancing a deal. That means that each email, presentation, or conversation needs to be relevant and impactful for the prospect. That’s a sales enablement imperative.

Presentations are a primary tool for sales people to connect with prospects. Slide decks summarize and, well, present your value. They’re also critical because they are often shared and reused as prospects sell internally to their colleagues.

Sales enablement and marketing will often produce “buffet” presentations. These massive decks will include dozens or hundreds of slides that sales people can select from. Sales people have to download these decks, determine which slides to use, and then carve out a new deck.

Help Sales Deliver the Perfect Presentation

That can be such a frustrating experience that sales people will often just use the same presentations over and over again. That leads to old, ineffective, or untargeted content being used. And that means prospects are simply not going to respond well to your presentation.

Alternatives like just tagging content and hoping sales reps pick the right slides are not productive. Instead, sales enablement pros can rely on KnowledgeTree’s PerfectPitch technology to instantly generate tailored decks for their prospects.

Predict Which Slides to Present

KnowledgeTree takes the guesswork and legwork out of building slide decks. Our PerfectPitch technology automatically builds individualized slide decks for each sales situation. It matches content to prospects’ persona, sales stage, geography, or other elements. That means with one click sales people get a generated presentation that matches their prospect’s needs.

If sales teams can’t find the perfect slides, they’ll build their own presentations

PerfectPitch auto-filters slides so only slides that matter to the prospect are selected. Sales people can easily share these custom decks right from the tool. That makes it easy to share the perfect presentation with customers and leads.

KnowledgeTree collects rich analytics about how slides are consumed by prospects. That data gets pushed back into our recommendation algorithm, ensuring that the most effective slides get used in future sales situations. Ineffective or unused slides? Sales enablement knows about them and can make them better.

When sales teams can quickly generate tailored presentations they have more effective conversations with prospects. And that means more sales.

The post Automatically Generate Custom Presentations – The PerfectPitch appeared first on KnowledgeTree.

Categories: ECM

Integrate Amazon search using AWS inside your Liferay portal

Liferay - Wed, 10/15/2014 - 11:28

In this blog post I will show an example of how to integrate external applications/services inside your Liferay portal.

In my example I created 2 portlets that you can use to integrate Amazon search inside your portal page.

Portlet 1: "Amazon Search"

Portlet 2: "Amazon result".

If you want to run this demo, please follow the steps below:

1 - Create your ID at http://aws.amazon.com
2 - Use the Access Key ID and register it for the Product Advertising API at https://affiliate-program.amazon.com/gp/flex/advertising/api/sign-in.html .
3 - Upload your WAR file to the Liferay portal, then drag and drop the 2 portlets onto a page.

4 - In portlet 1, "Amazon Search", click the "preference" link.

5 - Add the "Access Key" and the "Secret Key" that you created in the first step.

6 - This search is for the "toys" category; you can edit the code to search other categories.

7 - In the search result portlet, you can click the "details" link to go to the Amazon page.
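For reference, here is a minimal sketch of how a portlet can read keys saved on a preferences screen, using the standard javax.portlet API. The class name and preference names below are assumptions for illustration; check the downloaded source for the actual ones.

import java.io.IOException;
import java.io.PrintWriter;

import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.PortletPreferences;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

// Minimal sketch: read the AWS keys stored via the portlet's preference screen.
public class AmazonSearchPortlet extends GenericPortlet {

    @Override
    protected void doView(RenderRequest request, RenderResponse response)
        throws PortletException, IOException {

        PortletPreferences prefs = request.getPreferences();

        // Fall back to empty strings when the keys are not configured yet.
        String accessKey = prefs.getValue("accessKey", "");
        String secretKey = prefs.getValue("secretKey", "");

        response.setContentType("text/html");
        PrintWriter writer = response.getWriter();

        if (accessKey.isEmpty() || secretKey.isEmpty()) {
            writer.println("Please configure your AWS keys in the preferences.");
        }
        else {
            // ... sign and send the Product Advertising API request here ...
            writer.println("Ready to search Amazon.");
        }
    }
}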

 

You can download the code from here and the WAR file from here.

If you use the WAR file without building the code, please make sure you have JDK 7.

 

Enjoy :-)

 

Thank you.

Fady Hakim 2014-10-15T16:28:35Z
Categories: CMS, ECM

The Liferay Developer Network: A New Home for Developers

Liferay - Tue, 10/14/2014 - 16:23

At Liferay's North American Symposium, I announced the immediate availability of a new website we've developed specifically for those who use Liferay and write code on its platform. We call this site the Liferay Developer Network.

This site is the new home for Liferay's documentation and, by the time it gets out of beta, Liferay's community pages. After receiving good feedback from our user community for years, we knew that the way we currently publish our documentation had some problems:

  1. Many times, people would search, either on liferay.com or on the search engine of their choice, for a topic they needed to know about. Often, the search results would direct them to an article in Liferay's Wiki. The Wiki over the years has become a place where well-intentioned Liferay employees and community members have placed articles describing how to use various features of Liferay. Because the Wiki has been on the site for so long, Google tends to rank its articles pretty high. Even though I've been shouting from the highest turrets that the Wiki is not Liferay's documentation, Google's indexing bot doesn't have ears.

    Unfortunately, however, the Wiki is also where user documentation goes to die. Articles are written and then abandoned by their authors. They then become out of date and actually do more harm than good, by sending people who read them down rabbit trails that were meant for older versions of Liferay. For this reason, we are retiring the Liferay Wiki. In its place, we've created Liferaypedia, a new Wiki for defining Liferay, development, and open source terms.

  2. We've also heard feedback that our current documentation pages are hard to navigate. There's a reason for this: they're just HTML versions of documentation that's organized as books. We've now changed that. The Liferay Developer Network is divided into four sections:
    • Discover: The contents of Liferay's user-oriented documentation have been re-imagined for this section. Front and center is the documentation for Liferay Portal. Using a new interface, it's much easier to find the documentation you're looking for. The Social Office section contains our Social Office documentation. The Deployment section is for Liferay systems administrators everywhere: it contains documentation for installing and configuring Liferay, including clustering.

    • Develop: This is the section for developers. When we designed it, we asked ourselves, "Who is our audience, and how will they want to learn about Liferay?" What we learned was that we have a variety of readers: some want to learn in a step-by-step fashion, and some already have a project and want a quick answer to a question. To serve everybody, we created learning paths for developers new to Liferay who want to start from scratch, and we created tutorials for developers in the midst of a project who want to learn something quickly. And, of course, we have our reference documentation so you can look up APIs, tag libraries, DTDs, faces docs, and properties docs.

    • Distribute: Everything developers need to know about distributing their applications on Liferay Marketplace is right here. You can learn all about the benefits of Marketplace, how to get started, view the Marketplace User Guide, and more.

    • Participate: This is the new home for Liferay's community. As the months go by, we'll be migrating more and more of our community pages here. Currently, it contains information on how to contribute to Liferay, the aforementioned Liferaypedia, the feature ideas page, and the feedback forums.

  3. Want to contribute to the documentation like you maybe once did with the Wiki? We haven't left you out. In fact, we welcome contributions to our documentation. And here's the real kicker: because we have a team of people reviewing and updating our documentation, you can do the same thing you did with the Wiki. Submit it and forget about it. We'll take care of keeping the documentation you submit up to date with all the changes that happen to Liferay in the future. You won't have to worry or feel guilty about your submitted documentation again! Instead, you can feel good knowing that you made an important contribution to Liferay, because contributions help our community. There are three ways in which you can contribute:
    • Editing existing documentation. Every piece of documentation on the site contains an Edit with Github button. This lets you go to our documentation repository and use Github's tools to edit documentation right in your browser. When you're finished, you can send a pull request to the liferay-docs repository, and we'll review your updates and push them into the site.

    • Creating new documentation. Every section of the Liferay Developer Network has a corresponding folder in our repository. In that folder is another folder called new-articles. If you have documentation for a feature we don't currently cover, you can submit it right into this folder. You don't have to know where it goes in the rest of our docs or anything like that. We even have a shell script (Mac, Linux) and a batch file (Windows) that lets you preview your Markdown in HTML before you submit it. Submit it to the liferay-docs repository.

    • Contributing to Liferaypedia. We still have a Wiki, but it's for defining Liferay and open source terms. Currently, it's pretty bare, and we need to fill it out. We could use definitions for all kinds of terms that are germane to Liferay, like CMIS and SAML, as well as Liferay concepts like Theme Display and Service Builder.

In closing, I just want to say that we've designed this site for our community of users and developers. All design decisions were made based on feedback from our community. We know we're not perfect, however, and we may not have captured everything you've been telling us over the years. For that reason, and because we're in beta and can still change things, there's a feedback link at the bottom of every page. We welcome your feedback. Don't know where to start? How about starting with our learning paths. This is a brand new effort we're making, the learning paths aren't complete yet, and we want to make sure we're getting it right. Try reading the first learning path and let us know what you think. Will it help beginners get started with Liferay?

Or maybe you're more interested in mobile development on Liferay. We have a whole set of mobile tutorials that you could read and let us know if they're hitting the mark on Liferay mobile development.

Thanks for reading this long post, and thanks for all the great feedback we've heard on our documentation over the years. We hope that the new Liferay Developer Network serves you well as we continue to build it.

Richard Sezov 2014-10-14T21:23:23Z
Categories: CMS, ECM

KnowledgeTree’s Sales Enablement Platform Predicts What Messages Help Sales Win

KnowledgeTree - Mon, 10/13/2014 - 10:36

New Platform Radically Boosts Sales Results; Uses Data Science to Identify Sales Tools that Support Winning Conversations; Drives Best Practice Messaging Across Sales Teams

KnowledgeTree, the leading sales enablement application vendor, today announced a new platform that uses data science to help sales teams have winning conversations. The platform dramatically boosts sales team effectiveness by predicting and pushing to sales teams best practice messages and content for any sales situation.

Sales enablement is now a science focused on sales team best practices

95% of sales engagements are influenced by content, according to Demand Gen Report. Sales people must use compelling messaging to engage prospects in sales conversations. But up to 30% of sales people’s time is spent searching for or creating their own sales content.

One KnowledgeTree customer, Software AG, doubled its win-rate for a key business unit. Using KnowledgeTree to analyze which content advances leads and opportunities, it added millions of dollars to its top line.

KnowledgeTree’s Sales Enablement platform slashes wasted effort, using real-world data to predict which messages should be used in any sales scenario. The Sales Enablement platform helps sales teams to:

  • Discover the right messages at the right time. KnowledgeTree matches sales, marketing, and training content from any company source with individual sales situations. Relevant messaging is automatically pushed to sales teams, eliminating frustrating searches. Rich analytics track which content is effective in the field, so best practice content gets used by sales.
  • Pitch the best presentation to prospects. KnowledgeTree matches individual slides in corporate decks to each prospect or customer. KnowledgeTree’s PerfectPitch technology automatically generates presentations tailored to prospects and customers. No more non-standard decks; the best customized presentation always gets used.
  • Position your content with best practice emails. Even great sales content is ignored if it’s not positioned right. KnowledgeTree’s OnMessage technology lets sales enablement teams connect email templates with content and measure their results. That helps sales teams quickly use proven emails that drive prospects to content.
[See these new tools in action here.]

“Sales enablement is now a science focused on sales team best practices,” said Daniel Chalef, CEO of KnowledgeTree. “It’s not good enough to hope that sales finds and uses the right messaging. That’s why we built KnowledgeTree. To use data science to push content proven in the field to sales people when they need it most. This scientific approach to sales enablement has yielded massive results for our customers.”

“We chose KnowledgeTree because it dramatically enhances our sales team’s ability to communicate effectively with prospects and customers,” said Bill Dolby, Sr. Director of Sales Operations for RingCentral, the leading provider of cloud phone systems. “Our hundreds of sales reps don’t have to hunt for content or sales guidance. It is automatically pushed to them. That keeps our teams on message, focused on selling, and boosting our sales results.”

KnowledgeTree’s Sales Enablement platform is used by industry leaders like RingCentral, Xactly, Zuora, and more. To see a demonstration of KnowledgeTree in action, visit knowledgetree.com.

The post KnowledgeTree’s Sales Enablement Platform Predicts What Messages Help Sales Win appeared first on KnowledgeTree.

Categories: ECM

Portlet vs Widgets

Liferay - Fri, 10/10/2014 - 22:54

I get this question a lot, especially since JavaScript frameworks started to show up in enterprise applications: which is better, portlets or widgets, and why?

- The most obvious difference between portlets and widgets is that portlets are a server-side component model (designed to execute on the server) and widgets are a client-side component model (designed to run in the browser). But portlets can bleed into the client as well, in that more and more web interactions use AJAX for improved responsiveness, and portlets can emit markup that is basically a widget running in the browser.
Widgets also have a problem if you are trying to create a more enterprise-type application. Most organizations have a lot of backend data processing and a lot of logic or data manipulation; in those cases, using a language like Java and doing that processing on a server rather than on the client is a better choice.

- A downside of widgets is that their source code is downloaded and visible in the browser. Sometimes this can be a security problem, as you expose the logic of your applications to the public.

- Portlets are the most mature of the choices and are covered by second versions of widely adopted Java and OASIS standards. Widgets are relatively new, and you should expect some amount of evolution and possible churn as the industry moves toward standardization.
Moreover, the portlet spec is continually being refined and kept current. Portlet 3.0, aka JSR 362 (https://jcp.org/en/jsr/detail?id=362), is being worked on actively by the likes of IBM, Oracle, Apache, Liferay, Red Hat, Vaadin, and others in the expert group listed on that JCP page.

- With widgets, as all the logic is in the browser, you will probably spend more time testing against different browser types, in proportion to how much logic you put into the browser.

- Portlets work best when you have interaction between portlets and pages. That’s not to say that widgets don’t pass data parameters around, but portals and JSR 286 have matured enough to make this much easier to use.

- Liferay portlets have better UI interaction, like mobile responsive design (the UI adapts to fit a mobile device) and the ability to drag and drop a portlet on the page or move it around. iGoogle doesn’t even have this at the level that Liferay has, and it feels much “clunkier”. Here are some blog posts with some good screenshots:

https://www.liferay.com/web/pmesotten/blog/-/blogs/liferay-6-2-what-s-new-under-the-hood-
https://www.liferay.com/web/juan.fernandez/blog/-/blogs/liferay-6-2-new-mobile-features-pt-1-
https://www.liferay.com/web/juan.fernandez/blog/-/blogs/liferay-6-2-new-mobile-features-pt-2-

Hope that helps you :-)

Fady Hakim 2014-10-11T03:54:18Z
Categories: CMS, ECM

Why do I need a portal? I can develop anything

Liferay - Fri, 10/10/2014 - 22:45

There is a big difference between Liferay, which is a portal framework, and other J2EE frameworks (Spring, Struts, JSF...), which are just development frameworks.

In general, you can do everything with development, but the question is how long it will take you and how much money you will spend on development, maintenance, support... Also, the quality of the code is not always guaranteed.

With Liferay, we do that work for our clients. We have hundreds of the best developers around the world creating the Liferay framework and all the underlying integration and complex services, so that our clients can concentrate only on the business logic.
Instead of spending years developing a website and integrated services, you can spend months or even weeks to have a fully functional website.

Moreover, with Liferay we support our clients in all the development/production phases to make sure they always have a stable environment with the latest security updates, up to date with all the new technology. This means, for example, that if a new collaboration technology, portlet standard, or content management feature shows up, our clients will find that option/standard available in the next release, integrated and ready to use with their current environment.

Liferay is a hot-deployable environment, which means that at runtime you can add any new applications/themes/customizations... without any downtime or server restart. With a plain development framework, you will always need downtime for customization/integration.

Also, with Liferay we give our clients the freedom to use any standard Java framework to develop their applications/portlets. If you are happy with AngularJS, Spring, Struts, JSF, JSP, or any JavaScript framework... just go ahead and use it, no need to learn a new framework :-)

Fady Hakim 2014-10-11T03:45:09Z
Categories: CMS, ECM

Getting started with Vaadin in Liferay

Liferay - Fri, 10/10/2014 - 03:54

Greetings from the Liferay Symposium. The event was great and we had many good discussions with new and old Vaadin fans over here. And this is no wonder: as you learned earlier, Vaadin-based applications took a double win in the Liferay Marketplace.

Announcing the new Vaadin Liferay refcard

The updated Vaadin-Liferay refcard is out. It covers the fundamental stuff you need to know when creating portlets in Vaadin: Setting up the project, deployment models, UI composition, Liferay API integration and much more.

Just try it out

While you can get the refcard directly from DZone, here are the basic steps needed to start development.

1. Install Liferay 6.2 and Plugin SDK.

These are both available from Liferay website.

2. Install Liferay IDE 2.1 from Eclipse Marketplace

Make sure the liferay-m2e integration gets installed as well, otherwise the Maven project type will not be available in the Eclipse wizard.

3. Create new project

Make sure you create and/or choose a Maven profile for your project and the Vaadin framework.

Congratulations! You have your first Vaadin portlet ready. The wizard generates a full project with an example UI. It can be directly packaged and deployed to the Liferay portal either from the context menu Run As -> Run on server… or using the Liferay maven target liferay:deploy.
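To give you an idea of the kind of code involved, here is a minimal Vaadin 7 UI class, roughly the shape of what the wizard generates (the class name and label text here are my own):

import com.vaadin.server.VaadinRequest;
import com.vaadin.ui.Label;
import com.vaadin.ui.UI;

// The portal instantiates this UI class inside the portlet window.
public class MyPortletUI extends UI {

    @Override
    protected void init(VaadinRequest request) {
        // Compose the portlet UI from ordinary Vaadin components.
        setContent(new Label("Hello from Vaadin on Liferay!"));
    }
}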

 

To continue building your first portlet, download the Refcard and start experimenting. You’ll notice how easy it is to create nice apps for Liferay with Vaadin.

 

Download the DZone refcard

Sami Ekblad 2014-10-10T08:54:28Z
Categories: CMS, ECM

Setting up Liferay 6.2 on vFabric TC Server 2.6

Liferay - Fri, 10/10/2014 - 03:32
Sometimes, Tomcat is not enough to run Liferay; reasons may include performance, a client's wishes, etc.

So here are the steps to configure a Liferay instance on tc Server 2.6. The same steps can be followed for Liferay versions below 6.2; I have verified them at least for 6.1.

We have the D:\pFiles\LR\tc-instance\ directory where vFabric tc Server is installed, and we will use D:\pFiles\LR\tc-instance\liferay62.local\ to install our Liferay instance.

  1. Let's download Liferay from SourceForge:
    - Liferay Portal 6.2 CE GA2
    Download portal war file - liferay-portal-6.2-ce-ga2xxx.war
    Download Dependencies - liferay-portal-dependencies-6.2-ce-ga2xxx.zip
     
  2. Unzip/install tc Server to a location of your choice. I installed to D:\pFiles\LR\tc-instance. This directory will be called TCSERVERHOME.
     
  3. Now we need to create a server instance; let's name it liferay62. Use the following command:
    tcruntime-instance.bat create -i D:\pFiles\LR\tc-instance\liferay62 liferay62
    This will create a new instance named liferay62 in D:\pFiles\LR\tc-instance\liferay62.
     
  4. Now, let's install Liferay. Extract the dependencies file we downloaded to
    D:\pFiles\LR\tc-instance\liferay62\liferay62\lib

    NOTE: You might wonder why liferay62\liferay62. In case you want to install multiple instances, you will need separate data and deploy folders for each. This way, the deploy and data folders are created in liferay62\data and liferay62\deploy rather than tc-instance\data and tc-instance\deploy.
     
  5. We need some more files copied to the lib directory. You can copy them from a Liferay Tomcat bundle. You don't need to copy every database connector jar; just add the one for your database.
  6. Now we need to modify some configuration, for example memory settings.
    Modify the file
    D:\pFiles\LR\tc-instance\liferay62.local\liferay62.local\conf\wrapper.conf
    and change the following lines as needed. The last two lines may not be present, so you can add them to the file.
    wrapper.java.additional.8="-Xmx2048M"
    wrapper.java.additional.9="-Xss256K"
    wrapper.java.additional.10="-XX:MaxPermSize=256m"
    wrapper.java.additional.11="-Dfile.encoding=UTF-8"
  7. Create a directory in conf and add ROOT.xml
    D:\pFiles\LR\tc-instance\liferay62.local\liferay62.local\conf\Catalina\localhost\ROOT.xml
    <Context path="" crossContext="true">

        <!-- JAAS -->

        <!--<Realm
            className="org.apache.catalina.realm.JAASRealm"
            appName="PortalRealm"
            userClassNames="com.liferay.portal.kernel.security.jaas.PortalPrincipal"
            roleClassNames="com.liferay.portal.kernel.security.jaas.PortalRole"
        />-->

        <!-- Uncomment the following to disable persistent sessions across reboots. -->

        <!-- <Manager pathname="" /> -->

        <!-- Uncomment the following to not use sessions. See the property
        "session.disabled" in portal.properties. -->

        <!-- <Manager className="com.liferay.support.tomcat.session.SessionLessManagerBase" /> -->
    </Context>
  8. Change common.loader, and the port numbers if you wish, in the catalina.properties file. Make sure you modify both base.jmx.port and bio.http.port:

    common.loader=${catalina.base}/lib,${catalina.base}/lib/*.jar,${catalina.home}/lib,${catalina.home}/lib/*.jar,${catalina.home}/lib/ext,${catalina.home}/lib/ext/*.jar
     
  9. Extract the Liferay war file to
    D:\pFiles\LR\tc-instance\liferay62.local\liferay62.local\webapps\ROOT
     

Start your Liferay instance: go to D:\pFiles\LR\tc-instance\liferay62.local\liferay62.local\bin, open a command prompt, and run tcruntime-ctl.bat start

That's it. 

 

-Ravi Kumar Gupta

Cignex Datamatics

TechD of Computer World

Ravi Kumar Gupta 2014-10-10T08:32:45Z
Categories: CMS, ECM

Liferay remote publishing - Troubleshooting

Liferay - Thu, 10/09/2014 - 20:49

During the last Liferay North America symposium in Boston, I had the opportunity to attend Máté's interesting presentation (Best Practices for Using Staging in Liferay 6.2). I have always been fascinated by this complex feature in Liferay and I have spent hours "struggling" with it in the past years while helping companies implement and use it.

Remote publishing has been improved a lot since its first implementation and is very robust and reliable in Liferay 6.2. However, this feature is so complex that there are many situations in which the process will fail or not complete as you would expect.

I'd like to share some of my past experiences in order to help you understand how remote publishing works and how to debug and fix some common issues. Please also refer to Liferay documentation for basic understanding about Staging Page Publication and its configuration.

This applies only to Liferay 6.2+. Remote publishing has been re-implemented for this version and works differently than in previous versions.

Understanding the remote publishing process

The remote publishing feature is based on Liferay's export/import functionality. There are several important steps:

  • 1. Connection: staging server establishes a connection with the remote server in order to check the configuration.
  • 2. Export: staging server exports the desired site and its content as an archive (.lar) on the local storage (temp)
  • 3. Data transfer: staging server transfers the archive to the remote server
  • 4. Checksum: remote server validates the archive's integrity
  • 5. Validation: remote server checks for missing references or invalid content
  • 6. Import: remote server proceeds with the import of the content. If it fails, the entire import will be rolled back.
  • 7. Cleanup: both servers clean up temporary files

If you're lucky, everything will work as expected ;-)

This will most probably be the case when you test it for the first time with a small site and a little content. You'll realize later that it becomes trickier with complex web sites with hundreds of pages and web contents.

Remote publishing fails with errors

If your publication fails, first check the error message. Sometimes it gives you good advice about the issue (most probably a configuration problem or a missing reference).

If the error message says "Unexpected error" with strange details (FileNotFoundException, InvalidCheckSum, ...), proceed by identifying which step is failing and checking the corresponding advice below.

Identify which step fails

In order to identify which step fails, you need access to both servers (staging and remote). The remote publishing process will not clearly indicate which step failed, but you can figure it out by checking the servers.

  • During the export (step 2), Liferay creates a temporary file for the archive on the application server of the staging environment. For instance, check the /temp folder in Tomcat to see if a new file is being created. If this is the case, you know that Liferay is proceeding with the export.
  • During the data transfer (step 3), Liferay sends the archive to the remote server by splitting it into 10MB files. The remote server receives them and stores them in the document library. If the size of the archive file in the temp folder (staging) is no longer increasing and you see the remote server starting to create new files in the document library (/data/document_library), the process is currently proceeding with the data transfer. Another way to identify the beginning of this step is by monitoring the CPU of the staging server. Export uses a lot of CPU (to compress data into the zip file) but data transfer doesn't.
  • Step 4 is extremely quick and you won't be able to distinguish it from the step before. Generally, you will get an obvious message (InvalidChecksumException) if the process fails during this step.
  • Validation (step 5) is also difficult to clearly identify, but you should get a detailed error message about missing references when this step fails.
  • When the process starts importing the data, you'll notice that CPU usage increases on the remote server. You should also notice that the individual 10MB files have been removed and merged into one valid package in the document library (/data/document_library). This step can take several minutes to complete.
  • The last step never fails. If the import succeeds, cleanup will succeed too.
Steps 3 (data transfer) and 6 (import) are the most frequent to fail. If you are not sure which step fails, it is probably one of these two.

Error during 1. Connection

If you get an error during the connection (after a few seconds), check one of these:

  • Make sure that both servers have been properly configured (tunneling.servlet.shared.secret, axis.servlet.hosts.allowed, ...); a sketch of these properties follows this list. Check the Liferay Documentation.
  • If you're using a web proxy server (Apache HTTPD, BigIP F5, Netscaler, ...), make sure that it preserves the origin host in the proxy request, or the request will be rejected by the remote server due to an invalid IP address. For instance, use the "ProxyPreserveHost on" configuration in Apache.
  • If you cannot configure the proxy server (see previous point), consider changing the property "axis.servlet.hosts.allowed" of the remote server in order to match the IP address of the proxy server (rather than the staging server). WARNING: by doing this, you're allowing anyone to access the Liferay remote API. This is insecure.
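As a minimal sketch, the configuration mentioned above might look like this in portal-ext.properties on the remote server (the secret and the IP address are placeholders, not working values):

tunneling.servlet.shared.secret=<shared-secret-from-your-setup>
axis.servlet.hosts.allowed=127.0.0.1,<staging-server-ip>,SERVER_IP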
An easy way to check connectivity to the remote server is to access the /api/axis URL (wget or browser). If you're not authorized, you'll get a clear message with the IP address of the requester. This will help you understand what the remote server receives.

Error during 2. Export

Export should not fail. If it really does, I recommend:

  • Check that the server is not running out of disk space ;-)
  • Try to export the site (from the Control Panel) in order to see if this is really the problem
  • Try installing the latest patches (Liferay EE). Remote publishing is frequently improved by Liferay.
Error during 3. Data Transfer

This is a tricky one because you'll get strange errors and it's very difficult to debug. Try to find a good system administrator in the company who can monitor connections on the network (wireshark or similar). Common issues that I have faced:

  • Check timeouts and any other rules on proxies, servers, switches, application servers and anti-virus software. If the process always fails after X minutes, there is a good chance that something on your network, between the two servers, cuts the connection.
  • Check the size of the archive in the temp folder of staging environment. If the file is bigger than 10MB, it will be sent to the remote server in multiple pieces. Check with less data in order to see if the problem is related to that.
  • Good luck!
Error during 4. Checksum

This step should technically never fail. If it does, consider one of these recommendations:

  • Try to publish again. Maybe one sent file got corrupted during the transfer.
  • If you're using a cluster, check whether the sent files (when the archive is > 10MB) end up on different nodes. This should not happen, but if it does, Liferay will end up with 50% of the files on one server and 50% on the other (assuming your cluster has 2 servers). The checksum will then always fail.
  • Check the size of the archive in the temp folder of staging environment. If the file is bigger than 10MB, it will be sent to the remote server in multiple pieces. Check with less data in order to see if the problem is related to that.
Error during 5. Validation

An error during validation will provide more information in the "remote publishing" interface (history):

  • Try to identify the missing reference and understand why Liferay is complaining about it.
  • If your site is using global references (structures, templates, categories, etc.), make sure to publish /global site first. Global site can be published from the control panel (Sites).
  • If your site doesn't have any external reference, try publishing the entire content. In the "remote publishing" options, choose "All Content".
If none of the above solutions works, you could export the site from the control panel and check the content of the archive (zip file). This might help you understand what is missing.

Error during 6. Import

It's very difficult to cover all possible situations during the import step because it strongly depends on the content in your site. Common issues are:

  • If you get errors about duplicated content, try to publish the entire site.
  • Disable "Version history" to reduce the size of your publication and see if that makes any difference
  • Check logs on the remote server for more detailed information (stacktraces).
    • If you see OutOfMemoryException, increase the memory available for your server application (-Xmx)
    • If you see GenericJDBCException, check your JDBC connection pool. It might happen that all connections have been used and none released. You might need to restart your application server to free them. Check your JDBC settings according to your needs (see the sketch after this list).
  • If you have the feeling that the import is really slow, monitor your infrastructure (CPU, IO access) and give more resources to your virtual machine or server. If the import process takes too long (typically 30 minutes or more), your staging server will get a timeout and will not clean up properly, although the import still continues and completes (see next point).
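As an illustration only, pool sizing for Liferay's default connection pool can be tuned in portal-ext.properties; treat the values below as assumptions to adapt to your own load:

jdbc.default.maxPoolSize=100
jdbc.default.minPoolSize=10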
Error during 7. Cleanup

The cleanup step itself will never fail, but remote publishing doesn't end properly if the process takes too long. The staging server keeps waiting for the remote server to finish the import. After some time, the staging server will get a timeout and its connection will be reset. When this happens, the staging server will clean up on its side and set the status of the process to "failed". However, the import process continues on the other end (remote server) and might succeed. You'll end up with a wrong status and a "last publication date" that is not correctly set. To avoid this situation:

  • Try to optimize your infrastructure (CPU, Memory, IO, ...)
  • Configure timeouts (proxy server, application server, switch, etc.) for your convenience. But don't set the timeout value too high!

If you get into the situation that the publication technically succeeded but was reported as failure because of some timeout, start another publication and set the date range to "Last 12 hours". This smaller publication should execute faster, succeed and update the "last publication date" correctly. Your environment will then be ready for further publications.

Remote publishing, best practices
  • Disable version history by default (journal.publish.version.history.by.default=false). On production, you won't need all the versions but only the latest approved ones.
  • When using asset publisher with staging, don't choose a scope other than "Current site" or "Global". Your asset publisher won't work after publication any more because the group ids are different on the two environments. There are strategies to make it work but they are too complicated to be explained here :-)
  • Enable and experiment with remote publishing as soon as possible. Make sure to test the process with archives that are bigger than 10 MB before assuming that everything works perfectly.
Still doesn't work?

If remote publishing won't work in your environment, consider trying one of these:

  • If you're using a cluster, try to disable all but one node.
  • If a proxy server is installed in front of your application server, try publishing directly to the application server (by opening firewalls if any)
  • If using an SSL connection for the remote publishing process, try without it (http).
  • Try moving your remote server to the same network.
  • Try moving your remote server to the same machine.
The above experiments are not permanent solutions but will help you identify the issue.

--

Share your experience with me!

Sven Werlen 2014-10-10T01:49:00Z
Categories: CMS, ECM

Future of Work Video Series

Alfresco - Wed, 10/08/2014 - 11:06

About a year ago, we started filming a series that we called the Future of Work. We sought out experts in the field who are looking at the future of how we work from very different angles. This will probably be a long-term project that explores our work, our tools, our workplace and our place in business and tries to imagine what each will be like over the next ten years. We have material to fill several episodes and plan to release them over the next several months.

In the first episode, we explore what our workplace will be like and the role that technology will play. For this episode, we talk to Tim Tuttle (http://www.linkedin.com/in/timothytuttle), CEO and Founder of Expect Labs, Matt Mills (uk.linkedin.com/in/mattmillsprofile) of Featurespace and previously of Aurasma, and John Mancini, President of AIIM.

We chose Tim because of his background in Artificial Intelligence (AI) at MIT and his application of AI technology in the workplace with Expect Labs. We discovered Matt when we were exploring Augmented Reality (http://www.ted.com/talks/matt_mills_image_recognition_that_triggers_augmented_reality?language=en). He is now at Featurespace, which is applying research on behavioral analytics in the workplace. Both subject areas are fascinating. I have been involved with AIIM for several years now and it is always worth talking to John Mancini about how content and process technology will affect our workplace and how we do work.

From my personal perspective, I am hoping and expecting that technology is going to make work easier and more natural. Being an early adopter of technology over the last three decades, I can’t say that technology has always made life easier or more focused. Adopting the latest technology, although almost always fun, comes with the cost of adoption, a learning curve and a natural distraction.

My early adoption of blogging, Facebook, Twitter and Quora has always come at the expense of getting stuff done at work. I may be in the know, but it's not necessarily helping me get my job done. What I hope and expect is that as it develops, technology will become less distracting, more focused, more natural and simpler than preceding versions.

The technologies that Tim, Matt and John discuss may well be the solution to these problems of lack of simplicity and focus. For example, rooms that comprehend who has entered and can listen to and understand conversations will be able to provide a context to capturing and delivering information. Ubiquitous technology that includes being surrounded by touchscreen devices will make that information available wherever and whenever we need it. Artificial intelligence that is statistically driven will be able to understand what we can and cannot do and automatically provide support when we need it. Not just one, but multiple intelligent assistants will be built into all the applications we use and will be expert at how those applications work and the support you as the user need. These are pretty fantastic things to consider.

In future episodes, we will be exploring how we collaborate, what the new tools of the office will be and how our businesses will be fundamentally changed by the future of our work. It won’t be surprising that we will be focusing on information, types of activity, work process and the nature of work itself rather than the latest shiny devices. These are my interests and it is difficult to purely extrapolate technology to see where we are going. It takes a range of skills and insights to try to determine where we are going rather than just trying to figure out what the next five versions of an iPhone will be like. Therefore, we will be talking with people like Geoffrey Moore of Crossing the Chasm fame, Jimmy Wales, the creator of Wikipedia, and many other experts on the future of work to see how technology will be applied rather than just how it will evolve.

Take a look and see what you think. Join the conversation on how these and other technologies will affect the way we work and the very nature of the work we do.

Categories: ECM

Creating your own analytics platform within Liferay: A distributed commit log

Liferay - Wed, 10/08/2014 - 06:32

In the last entry I made a quick overview of the proposed solution for the "problem" of building an analytics platform within the Liferay platform. In this entry I will go deeper into the log data structure, I will present the Apache Kafka project and we will analyse how we can connect Liferay and Kafka to each other.

As a quick reminder, I previously said that a log data structure is a perfect fit when you have a data workflow problem.

 

A log is a very simple data structure (possibly one of the simplest). It is just an ordered, append-only sequence of records. For those of you who are familiar with database internals, log data structures have been widely used to implement ACID support in relational databases, and their usage has evolved over time: logs are now used to implement replication among databases (you can take a look at many of the implementations available out there).

Ordering and data distribution are even more important when we move into the distributed systems world; you can take a look at protocols like ZAB (the protocol used by Zookeeper), Raft (a consensus algorithm designed to be easy to understand) or Viewstamped Replication. Sadly, distributed systems theory is beyond the scope of this blog post :)

Let's move into some more practical details and see how we can model all the different streams of information we already have.

Apache Kafka

Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design.
  • Kafka keeps feeds of messages in categories called topics
  • Processes that publish messages to a Kafka topic are called producers (see the sketch after this list)
  • Processes that subscribe to topics and process the feed of published messages are called consumers
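To make those concepts concrete, here is a minimal producer sketch in Java. It uses the kafka-clients producer API (a newer client than what was available when this post was written) and a made-up topic name, so treat it as an illustration rather than part of the bridge:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RatingsProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        // address of one Kafka broker (illustrative)
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // append one record to the "blog-ratings" topic (topic name is made up)
            producer.send(new ProducerRecord<>("blog-ratings", "entry-123", "{\"score\":4}"));
        }
    }
}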

It is not the goal of this post to cover Kafka's internals so, in case you are interested, good documentation is available on their web page.

Connecting Kafka and Liferay

While building the first prototype of the communication channel between both systems I had a few goals in mind:

  • Easy to deploy and configure
  • Transparent for the regular user so learning a new API is not mandatory
  • Allow advanced usage of the Kafka API

I've built a small OSGi plugin which bridges our Liferay portal installation with a Kafka broker through the Message Bus API. A general overview of how this integration works is shown in the next figure.

The data flow depicted in the previous picture is extremely simple:

  1. The Kafka Bridge registers a new destination within the Message Bus. At the time of this writing this destination is called "kafka_destination" and cannot be changed
  2. If you want to send a message to the Kafka broker you just need to publish a message to the previous destination.
  3. The previous message needs to declare:
  • The name of the Kafka topic we want to publish to
  • The payload with the contents of the message we want to store

You can find all the source code of the Kafka bridge at my GitHub repo.

A real example: publishing ratings

Let's write a small example where we publish all the blog ratings into our Kafka broker.
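Since the original Gist can't be embedded (see the note below), here is a minimal sketch of such a publication through the Message Bus API; the "topic" key name and the topic value are my assumptions, the real message contract is defined by the bridge source in the repo:

import com.liferay.portal.kernel.messaging.Message;
import com.liferay.portal.kernel.messaging.MessageBusUtil;

public class KafkaPublishExample {

    public void publishRating(String ratingAsJson) {
        Message message = new Message();

        // Name of the Kafka topic to publish to ("topic" key is an assumption)
        message.put("topic", "blog-ratings");

        // Payload with the contents of the message we want to store
        message.setPayload(ratingAsJson);

        // Destination registered by the Kafka bridge (step 1 above)
        MessageBusUtil.sendMessage("kafka_destination", message);
    }
}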

  Every time we create and/or update a rating we can publish a new message. It seems there is a bug in the blogs application so Gists cannot be properly inserted. I will update the blog entry once the bug is fixed: https://gist.github.com/migue/ce4cdb0925ac2eb7266d

As you can see in the previous source code there is no new API to learn; you can publish your message to the Kafka broker just using our Message Bus API. In order to test the previous example you just need to create a Kafka topic with the name used in the previous snippet and, for now, use the command line Kafka client which is included in the Kafka installation.

At this point we have a general overview of the system we want to build and how we can interconnect two of the main components (Liferay and Kafka). In the upcoming entries we will build a more complex example where we will put in place the last piece of our infrastructure: the analytics side.

We will analyse some more advanced usages of Kafka, and we will introduce Spark as the foundation framework for building our analytics processes. Realtime and batch processing, machine learning algorithms and graph processing will be some of our future topics.

Miguel Ángel Pastor Olivar 2014-10-08T11:32:41Z
Categories: CMS, ECM

Creating your own analytics platform within Liferay

Liferay - Tue, 10/07/2014 - 08:00
Yesterday I was talking at the Liferay North America Symposium here in Boston about how you can get more value out of all the data you already own (even if you are not aware you own it). It was the first time I have spoken at the North America event, so it was really exciting for me (in addition, they put my talk in the big room :) To be honest I am not sure how the talk went ... I tried to keep all the gory details hidden (at least as much as I could) but I am not sure I succeeded. The good part is I felt pretty comfortable during the talk :)

Coming back to the topic of the talk, I mainly went through some of the most popular storage and computational models available in the Big Data arena nowadays and right after that I proposed a reference architecture based on Open Source building blocks. Along this and a few upcoming blog posts I would like to share with you some of the technical details I deliberately omitted during my talk and build a simple but powerful analytics platform on top of Liferay.

Reference architecture

In this first blog post I would like to make a quick tour of the main components of my proposed solution in order to offer a general overview of what I am trying to accomplish. A really general overview of the final solution would be something like this:

 

As you can see in the previous image this is a really simple architecture, but we will discover along the future blog posts how it can turn into a powerful and useful system. I'm basically trying to build a completely decoupled system where the source of the information has nothing to do with the consumer of it.

This decoupling would allow us to focus our efforts; for example, we could have a team in charge of the User Tracking generation (maybe at the client side) while another team reads this data from the event system and does some processing on the stream of information.

We have three main pieces within the system we are trying to build:

The first one is the sources of information. And this is something where Liferay really shines, because you have already tons of different datasources with really useful info: ratings on different entities like blog posts or message boards entries, how different entities are interrelated, all the information you have stored in the database, search indexes, system events (like transaction information, render times, ...), browsing info (this is something we've done for the Content Targeting project), and many more I'm sure I am missing at this very moment.

The second main piece is the Event System. I am calling it Event System because I think most of you will be pretty familiar with this terminology, but I'm basically referring to a log. Personally I think a log data structure is the best solution when you have to solve a problem of data flow between different systems.

A log is just a data structure where the only way to add information is at the end, and all the records you insert are ordered by time. A toy sketch of this idea follows.
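As an illustration only (a real log like Kafka's is durable, partitioned and replicated; this one is a single-node, in-memory sketch):

import java.util.ArrayList;
import java.util.List;

final class AppendOnlyLog<T> {

    private final List<T> records = new ArrayList<>();

    // Appending at the end is the only write operation; the position
    // of a record (its offset) doubles as its identifier.
    synchronized long append(T record) {
        records.add(record);
        return records.size() - 1;
    }

    // Readers consume sequentially, tracking their own offset.
    synchronized T read(long offset) {
        return records.get((int) offset);
    }
}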

We will go deeper into this datastructure in the upcoming entries and we will see how the Apache Kafka project satisfies all the requirements we have. Of course, we will see how we can interconnect Liferay with the Apache Kafka distributed log.

Last but not least, we have the third main piece of our new infrastructure: the computational and analytics side. Once we have all the information we need stored within the log, we may need to "move" some of this data into an HDFS cluster so we can do some data crunching, or we may want to do some "real time" analysis on a stream, or maybe we just want to write some machine learning algorithm to create a useful mathematical model. Don't worry, we will go deeper in the future.

I know, I haven't included many technical details in this entry, but I will in the future, I promise.

Miguel Ángel Pastor Olivar 2014-10-07T13:00:28Z
Categories: CMS, ECM

Sync 3.0 beta release

Liferay - Mon, 10/06/2014 - 15:45

 

   Sync Beta 3.0 is out!

 

 

We are very close to releasing Sync 3.0 and need your feedback!

We have made a complete UI revamp since the last version; take a look at the screenshots below:

Very neat, isn't it?

 

How can I become an Android Beta tester?

 

If you have an Android device and want to become a beta tester, subscribe to this Google Group:

https://groups.google.com/forum/#!forum/liferay-sync

You'll receive instructions on how to install the app once you subscribe to it.
 

How can I become an iOS Beta tester?  

We are still working on the iOS app and will release a beta version soon.

Apple's TestFlight app only runs on iOS 8 and we can only deliver the app to up to 25 devices. If you want to try it out, hurry up and fill in this form:

http://goo.gl/forms/YOFKdNW4kd

We will contact you as soon as it is available.

Bruno Farache 2014-10-06T20:45:58Z
Categories: CMS, ECM

Angular Adventures in Liferay Land

Liferay - Sun, 10/05/2014 - 13:49
Intro

A lot of developers will probably know this feeling: you've just returned from a conference, you've seen lots of exciting new technologies and now you can't wait to try them out! For me one such moment and technology was Devoxx 2013 and seeing an AngularJS talk by Igor Minar and Misko Hevery. The speed and ease of development appealed very much to me, but I was a bit hesitant because of the language choice: Javascript. Javascript and I have been reluctant friends who tolerate each other, but don't need to spend a lot of time together (a bit like Adam and Jamie of Mythbusters fame). I usually try to hide all Javascript complexity behind the frameworks I use, e.g. JSF/Primefaces. Nonetheless I did see and love the inherent challenge of trying out a new framework and seeing how it can be used in what I do on a daily basis: customizing and writing portlets to use in a Liferay portal. So for this blog post I forced myself to do exactly that.

As mentioned before, we usually write JSF based portlets to use in Liferay and the JSF implementation of our choice at ACA has been Primefaces for a couple of years now (as far as JSF frameworks go a good choice it seems, as Primefaces seems to have 'won' the JSF framework battle). With JSF being an official specification for web applications, and there also being a specification to bridge JSF to work nicely in a portal environment, this combination has served us well. With AngularJS there will be none of these things. AngularJS is a standalone JS framework that is on one side pretty opinionated about certain things (which usually means trouble in a portal environment), but on the other side is pretty open to customization and changes.

This blog post will try to show how AngularJS can be used to build a simple example portlet, integrate it with Liferay (services, i18n, IPC, ...), show what the technical difficulties are and how these can be overcome (or worked around). The full code of this portlet is available on Github under the MIT license: angular-portlet. Feel free to use it as a base for further experiments, as the base for a POC, whatever... and comments/ideas/questions/pull requests are welcome too.

Now on to the technical stuff!

Lots & lots of Javascript

The first hurdle which I thought might give problems was using an additional Javascript framework in an environment that already has some. Liferay itself is built on YUI & AlloyUI and provides those frameworks globally to every portlet that runs in the portal environment. Liferay used to use JQuery and will be using JQuery again in future versions, but even in the current version, Liferay 6.2, it is perfectly possible to use JQuery in your portlets if you use it in noConflict mode. Also Primefaces, which uses JQuery, works fine in a Liferay portal.

AngularJS will work with JQuery when it is already provided or fall back to an internal JQLite version when JQuery isn't provided. So when I added AngularJS to a simple MVCPortlet and used it in a standard Liferay 6.2 (which has YUI & AlloyUI, but no JQuery) my fears turned out to be unwarranted: everything worked just fine and there were no clashes between the Liferay and Angular javascript files.

Not alone

The next step was to try and take the simple Hello World! application that is built in the AngularJS learn section of their website, and here I immediately ran into an incompatibility: AngularJS, with its ng-app attribute, pretty much assumes it is alone on the page. In a portal, like Liferay, a portlet is usually not alone on the page. It might be, but it may never assume this. The ng-app attribute is what AngularJS uses to bootstrap itself. As long as our custom AngularJS portlet is alone on a page this will work (as is demonstrated by other people that tried this), but once you add a second AngularJS portlet to the page (or a second portlet instance of the same portlet), the automatic bootstrapping via the attribute will cause problems.

Looking around, it was immediately clear that I'm not the first person to try to use AngularJS to build portlets. All these people ran into the same problem and solved it pretty similarly: don't let AngularJS bootstrap itself automatically via the attribute, but call the bootstrapping code yourself and provide it with the ID of the element in the page that contains the AngularJS app GUI.
Because we're working in a portal environment and possibly using instanceable portlets, this ID needs to be unique, but this problem is easily solved by using the <portlet:namespace/> tag that is provided by the portal in a JSP and is unique by design.

In your main Javascript code add a bootstrap method that takes the unique ID of the HTML element (something prefixed with the portlet namespace) and the portlet ID (aka the portlet namespace). Inside this method you can do or call all the necessary AngularJS stuff and end with the bootstrap call.

function bootstrap(id, portletId) {
    // create a module per portlet instance; the empty dependency array
    // is needed to define (and not just look up) the module
    var module = angular.module(id, []);
    // add controllers, etc... to module
    module.controller("MainCtrl", ['$scope', function($scope) {
        // do stuff
    }]);
    angular.bootstrap(document.getElementById(id), [id]);
}

From your JSP page you can simply call this as follows:

<%@ taglib uri="http://java.sun.com/portlet_2_0" prefix="portlet" %>
<%@ taglib uri="http://liferay.com/tld/aui" prefix="aui" %>

<portlet:defineObjects />

<div id="<portlet:namespace />main" ng-controller="MainCtrl" ng-cloak></div>

<aui:script>
    bootstrap('<portlet:namespace />main', '<portlet:namespace />');
</aui:script>

Bootstrapping AngularJS like this allowed me to create a simple, instanceable, Hello World! AngularJS portlet that you could add multiple times to the same page, with each instance working independently of the others.

From here to there

The next problem is the one I knew beforehand would cause the most problems: navigating between pages in an AngularJS app. The problem here is that AngularJS assumes control over the URL, but in a portal this is a big no-no. Everything you try to do with URLs needs to be done by asking the portal to create a portal URL for your specific portlet and action. If you don't do this and mess with the URL yourself you might at first think everything works as expected, especially with a single AngularJS portlet on the page, but with multiple you'll quickly see things will start to go wrong.

I first started off trying to make the default AngularJS routing work correctly in a portlet, at first by creating portlet specific URLs (using the namespace stuff mentioned before), and later by trying to use the HTML5 mode, but whatever I tried, I couldn't get it to work completely and consistently. After this I Googled around a lot and found several other AngularJS routing components that can be used as a replacement for the original, but here too I couldn't get them to work like I wanted.

I still think that one of these should work and I probably made one or more mistakes while trying them out, but due to time constraints I opted to go for a simple, albeit hacky, solution: using an internal page variable combined with the AngularJS ng-include attribute. The reason I settled on this is that portlets are meant to be relatively small pieces of functionality that can be used in combination with each other (possibly using inter portlet communication) to provide larger functionality. This means there will usually only be limited navigation needed in a portlet between only a small number of pages, which lets us get away with this hack without compromising the normal AngularJS workings and development speed too much.

To make this hack work you just need to add an inner div to the one we already had, add the ng-include and src attributes to it and point the src attribute to a value on your model, called page in the example, that contains whatever piece of html you want to show.

<div id="<portlet:namespace />main" ng-controller="MainCtrl" ng-cloak>
    <div ng-include src="page"></div>
</div>

In your Javascript code you only need to make sure that this page field on your model is initialized on time with your starting page and changed on navigation actions. We can easily keep partial HTML pieces as separate files in our source, by placing them in the webapp/partials directory of our Maven project, and reference them using a portlet resource URL. Constructing such a resource URL can be done using the liferay-portlet-url Javascript service that Liferay provides.

var resourceURL = Liferay.PortletURL.createRenderURL();
resourceURL.setPortletId(pid);
resourceURL.setPortletMode('view');
resourceURL.setWindowState('exclusive');
resourceURL.setParameter('jspPage', '/partials/list.html');
$scope.page = resourceURL.toString();

This code can be easily moved to an AngularJS service and switching pages is as simple as calling a function using the ng-click attribute, calling the service with the correct parameters and assigning the result to the page field on the model. You can find the complete source code for this in the example portlet on GitHub.

To make sure this Liferay service is loaded and available you also need to add something to the aui:script tag in your view.jsp:

<aui:script use="liferay-portlet-url,aui-base">

No REST for the wicked

Now that we are able to have multiple AngularJS portlets, with navigation, on a page, the next step is to try and work with REST (or REST like) services. These could be REST services that Liferay provides or simple services your portlet provides and that AngularJS can consume.

First we'll look at the services Liferay itself provides. A subset of the services that you are accustomed to using in your Java based portlets are also available by default as REST services. These can be called using the Liferay.Service Javascript call and can be explored using a simple app that Liferay exposes: check out /api/jsonws on a running Liferay instance. With this app you can easily explore the services, how they can be called and which parameters you'll need to provide. Calling such a service, in our case the Bookmark service, is pretty easy:

Liferay.Service(
    '/bookmarksentry/get-group-entries',
    {
        groupId: Liferay.ThemeDisplay.getScopeGroupId(),
        start: -1,
        end: -1
    },
    function(obj) {
        // do something with result object
    }
);

This code too can be easily moved to an AngularJS service, as is shown in the example portlet on GitHub. You'll also notice the use of the Liferay.ThemeDisplay Javascript object here. It provides us with access to most of the stuff you're accustomed to using via the ThemeDisplay object in normal Liferay Java code, such as the company ID, group ID, language, etc.

As with the resource URL stuff before, we'll again have to make sure the Liferay.Service Javascript stuff is loaded and available by adding liferay-service to the use attribute of the aui:script tag in our view.jsp:

<aui:script use="liferay-portlet-url,liferay-service,aui-base">

If you need to access Liferay services that aren't exposed by Liferay as a REST service, or if you just want to expose your own data, I'll show you a simple method here that can be used in a MVCPortlet. We'll need to implement the serveResource method in our portlet so that we can create a ResourceURL for it and use that in our AngularJS code.

public class AngularPortlet extends MVCPortlet {

    public void serveResource(ResourceRequest resourceRequest, ResourceResponse resourceResponse)
            throws IOException, PortletException {
        String resourceId = resourceRequest.getResourceID();
        try {
            // check resourceID to see what code to execute, possibly using a
            // parameter, and return a JSON result
            String paramValue = resourceRequest.getParameter("paramName");
            Gson gson = new Gson();
            String json = gson.toJson(result);
            resourceResponse.getWriter().print(json);
        } catch (Exception e) {
            LOGGER.error("Problem calling resource serving method for '" + resourceId + "'", e);
            throw new PortletException(e);
        }
    }
}

In the actual example portlet on GitHub you'll see I've implemented this a bit differently, using a BasePortlet class and a custom annotation so that you're able to annotate a normal method to signal which resourceId it should react to, but the example code above should give you the general idea. Once you have a serveResource method in place you can call it from within your AngularJS code as follows:

var url = Liferay.PortletURL.createResourceURL();
url.setResourceId('myResourceId');
url.setPortletId(pid);
url.setParameter("paramName", "paramValue");
$http.get(url.toString()).success(function(data, status, headers, config) {
    // do something with the result data
});

To create a valid resourceUrl that'll trigger the serveResource method in your portlet you always need to provide it with a resourceId and portletId. Additionally you also have the option of adding parameters to the call to use in your Java code.

You promise?

When trying out these various REST services I quickly ran into problems regarding the asynchronous nature of Javascript and AngularJS. This is something that Primefaces has shielded me from most of the time, but that immediately turned out to be something that I needed to think about and apply rigorously in an AngularJS based portlet. Luckily this is a known 'problem' that has an elegant solution: promises. This means you just need to write and call your AngularJS service/factory in a certain way and all the necessary async magic will happen. For this we'll revisit the code we used to get bookmarks, this time also wrapped in a nice AngularJS factory in a separate Javascript file:

'use strict';

angular.module("app.factories", []).
    factory('bookmarkFactory', function($q) {
        var getBookmarks = function() {
            var deferred = $q.defer();
            Liferay.Service(
                '/bookmarksentry/get-group-entries',
                {
                    groupId: Liferay.ThemeDisplay.getScopeGroupId(),
                    start: -1,
                    end: -1
                },
                function(obj) {
                    deferred.resolve(obj);
                }
            );
            return deferred.promise;
        };
        return {
            getBookmarks: getBookmarks
        };
    });

Using this pattern, creating a deferred result and returning a promise to it from our method, we've made our call asynchronous. Now we just need to call it correctly from our controller using the then syntax:

var module = angular.module(id, ["app.factories"]);
module.controller("MainCtrl", ['$scope', 'bookmarkFactory',
    function($scope, bookmarkFactory) {
        bookmarkFactory.getBookmarks().then(function(data) {
            $scope.model.bookmarks = data;
        });
    }]
);

Lost in translation

Now that the most important points have been tackled, I wanted to move on to something that is pretty important in the country where I'm from: i18n. In Belgium we have 3 official languages, Dutch, French & German, and usually English is also thrown into the mix for good measure. So I wanted to find out how to add i18n to my AngularJS portlet and, while doing so, see if I could make it into a custom directive. The current version of this simple directive uses an attribute and only allows retrieving a key, without providing and replacing parameters in the value.

module.directive('i18n', function() {
    return {
        restrict: 'A',
        link: function(scope, element, attributes) {
            var message = Liferay.Language.get(attributes["i18n"]);
            element.html(message);
        }
    }
});

The directive above uses the Liferay.Language Javascript module to retrieve the value of the given resource key from the portlet's language bundles and sets this as the value of the tag on which the directive is used. To be able to use this Liferay Javascript module we'll again need to add something to the use attribute of the aui:script tag to make sure it is loaded and available: liferay-language. Once we have this directive, using it in our HTML partials is pretty simple:

<h2 i18n="title"></h2>

This piece of HTML, containing our custom i18n directive, will try to retrieve the value of the title key from the portlet's resource bundles that are defined in the portlet.xml. The important word in the previous sentence is try, because it seems there are a couple of bugs in Liferay (LPS-16513 & LPS-14664) which cause the Liferay.Language Javascript module to not use the portlet's resource bundles, but only the global ones. Luckily there is a simple hack that will allow us to still make this work: add a liferay-hook.xml file to the portlet and use it to configure Liferay to extend its own language bundles with our portlet's.

<?xml version="1.0"?>
<!DOCTYPE hook PUBLIC "-//Liferay//DTD Hook 6.2.0//EN" "http://www.liferay.com/dtd/liferay-hook_6_2_0.dtd">
<hook>
    <language-properties>Language.properties</language-properties>
</hook>

Can you hear me?

After taking all the previous hurdles you end up with a pretty usable portlet already, but the last thing I wanted to try was to get AngularJS portlets to talk to each other using IPC (Inter Portlet Communication) or something similar. As AngularJS is a Javascript framework, the obvious choice would be to use the Liferay Javascript event system: Liferay.on & Liferay.fire. Integrating this in my controller turned out to be not as straightforward as I expected, but once I threw promises and $timeout into the mix I got the result I expected at the beginning. To fire an event just use the following in your code:

Liferay.fire('reloadBookmarks', { portletId: $scope.portletId });

You can use any event name you like, reloadBookmarks in our case, and you can even have multiple events. We also pass in the portletId ourselves, as this doesn't seem to be in the event data by default anymore, and it can be useful to filter out unwanted events on the portlet that is the originator of the event. This event data can be basically any Javascript/JSON object you want.

Liferay.on('reloadBookmarks', function(event) {
    if (event.portletId != $scope.portletId) {
        $timeout(function() {
            bookmarkFactory.getBookmarks().then(function(bookmarks) {
                $scope.model.bookmarks = bookmarks;
            });
        });
    }
});

Conclusion

It took some time and a lot of Googling and trying out stuff, but in the end I can say I got AngularJS to work pretty well in a portal environment. It is usable, but for someone that is used to spending most of his time in Java and not Javascript it was quite a challenge, especially when something fails. The error messages coming from Javascript aren't always as clear as I'd want them to be and the IDE support (code colouring, code completion, debugging, ...) isn't up to what I'm used to when writing JSF/Spring based portlets. But I think the following image expresses my feelings pretty well:

 

More blogs on Liferay and Java via http://blogs.aca-it.be.

Jan Eerdekens 2014-10-05T18:49:13Z
Categories: CMS, ECM