2018 Agenda
For the 2018 Megaputer Analytics Conference, there will be five full days of festivities to grow, inspire, and build a stronger foundation of data & text analytics. Below is the agenda for each day of the event.
LATEST VERSION 10/31/18
The conference agenda may change to reflect guest speaker talks and relevant changes in analytical trends. The online agenda is smart-phone and/or tablet friendly. A hard copy of the agenda will be provided in your 2018 Megaputer Analytics Conference welcome packet distributed at registration.
8:00 AM – 8:35 AM
Presented by Brian Howard, Sales & Marketing Manager
We will kick off the conference by introducing PolyAnalyst, Megaputer’s analytics software. This comprehensive tool offers so much power and flexibility that it’s hard to know where to begin. This introduction will help attendees build a basic foundation for deciding which tracks and topics during the conference will be most relevant to their needs and interests.
8:35 AM – 9:10 AM
Presented by Jeffrey Palan, Data Analysis Consultant
Learn how a number of different types of data from different sources can be loaded and used in PolyAnalyst, whether joined together or processed separately.
9:10 AM – 9:45 AM
Presented by Yi Wang, Senior Data Analysis Consultant
This workshop will introduce the basic functions of the Data Audit node, Category Replacement node, and Spell Check node that are often used for data cleansing.
9:45 AM – 10:20 AM
Presented by Zhen Li, Data Analysis Consultant
Come and learn how simple it is to manipulate data quickly and effectively using PolyAnalyst. Topics covered include data joining, data merging, column modification, row modification, aggregation, and distinction.
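For a concrete picture of these operations before the workshop, here is a minimal sketch in pandas rather than PolyAnalyst; the tables and column names (customers, orders, cust_id, amount) are invented purely for illustration.

    import pandas as pd

    # Two small, made-up tables standing in for real source data.
    customers = pd.DataFrame({"cust_id": [1, 2, 3], "region": ["East", "West", "East"]})
    orders = pd.DataFrame({"cust_id": [1, 1, 2, 3], "amount": [120.0, 80.0, 200.0, 50.0]})

    joined = orders.merge(customers, on="cust_id", how="left")    # data joining
    joined["amount_band"] = joined["amount"].apply(
        lambda a: "high" if a > 100 else "low")                   # column modification
    filtered = joined[joined["amount"] > 75]                      # row modification (filtering)
    by_region = joined.groupby("region")["amount"].sum()          # aggregation
    distinct_regions = joined["region"].drop_duplicates()         # distinction
    print(by_region)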
10:20 AM – 10:30 AM
Please enjoy a quick break, refresh with a beverage and mingle with other attendees.
10:30 AM – 11:05 AM
Presented by Wilson Zhou, Data Analysis Consultant
Learn how clustering methods can provide insight into key variables and data segments, then see how dimension reduction is used to examine clusters across variables in order to identify trends and patterns of interest.
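As a rough, non-PolyAnalyst illustration of the two ideas paired in this session, the scikit-learn sketch below clusters synthetic data and then projects it to two dimensions so the segments can be inspected; the data and parameter choices are assumptions made for the example.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Two synthetic customer segments in a 5-variable space.
    X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(4, 1, (100, 5))])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)  # clustering
    X_2d = PCA(n_components=2).fit_transform(X)                              # dimension reduction
    for k in (0, 1):
        print(f"cluster {k}: mean position {X_2d[labels == k].mean(axis=0).round(2)}")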
11:05 AM – 11:40 AM
Presented by Jason Liu, Senior Data Analysis Consultant
This demo shows how to use multiple distribution analysis tools to identify targets that warrant further fraud investigation.
11:40 AM – 12:15 PM
Presented by Kathryn Verhoeven, Data Analysis Consultant
Knowing how to accurately and efficiently classify data is essential for capturing valuable results. This session will show PolyAnalyst users the various tools available for performing data classification tasks.
12:15 PM – 1:20 PM
Lunch will be provided with a longer break to network with attendees and the Megaputer staff.
1:20 PM – 1:55 PM
Presented by Chris Farris, Data Analysis Consultant
Learn how to effectively utilize PolyAnalyst’s predictive models, from selecting a model, to establishing testing sets, to refining variables for better modeling.
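The general workflow (pick a model, hold out a testing set, evaluate) can be sketched in a few lines of scikit-learn, as below; this is an assumed stand-in example, not how the corresponding PolyAnalyst nodes are configured.

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    # Establish a testing set so the model is judged on data it has not seen.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)  # select and train a model
    print("holdout accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))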
1:55 PM – 2:30 PM
Presented by Chris Farris, Data Analysis Consultant
Walk through the process of creating a predictive model using data in the credit industry.
2:30 PM – 3:05 PM
Presented by Wilson Zhou, Data Analysis Consultant
Knowing which products are purchased together helps retailers suggest additional items before a customer clicks to check out. In this presentation, you will learn how to perform basket analysis on transaction data to determine which products customers purchase together.
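The core computation behind basket analysis can be stated in a few lines; the toy transactions below are invented, and PolyAnalyst’s basket analysis tools perform the equivalent counting (and much more) at scale.

    from collections import Counter
    from itertools import combinations

    transactions = [
        {"bread", "milk"},
        {"bread", "diapers", "beer"},
        {"milk", "diapers", "beer"},
        {"bread", "milk", "diapers"},
    ]

    item_counts = Counter(item for basket in transactions for item in basket)
    pair_counts = Counter(frozenset(pair) for basket in transactions
                          for pair in combinations(sorted(basket), 2))

    n = len(transactions)
    for pair, count in pair_counts.most_common(3):
        a, b = sorted(pair)
        print(f"{a} -> {b}: support={count / n:.2f}, confidence={count / item_counts[a]:.2f}")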
3:05 PM – 3:15 PM
Please enjoy a quick break, refresh with a beverage and mingle with other attendees.
3:15 PM – 3:50 PM
Presented by Bipin Inamdar, Senior Data Analysis Consultant
Social networks exist in all situations where individuals or systems interact with each other: social media, email communications, academic publishing, computer networking and more. Learn how you can use PolyAnalyst for identifying the points of maximum leverage within a network for efficiently influencing its behavior.
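As a rough illustration outside of PolyAnalyst, centrality measures are one standard way to find those points of leverage; the tiny networkx example below uses a fabricated communication graph.

    import networkx as nx

    # A fabricated communication network: an edge means "these two people interact".
    G = nx.Graph([
        ("ana", "ben"), ("ana", "carla"), ("ben", "carla"),
        ("carla", "dev"), ("dev", "elena"), ("dev", "frank"),
    ])

    # People who sit on many shortest paths are natural points of leverage.
    centrality = nx.betweenness_centrality(G)
    for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
        print(f"{person}: {score:.2f}")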
3:50 PM – 4:25 PM
Presented by Chris Farris, Data Analysis Consultant
How can we make the best use of data that comes from multiple sources? In this workshop we will learn how to apply techniques to identify unique entities such as people or companies. Then we’ll perform what is often called fuzzy matching to build entity profiles containing information in one unified place.
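For intuition, the sketch below groups name variants with a simple string-similarity score from Python’s standard library; the record list and the 0.6 threshold are assumptions for illustration, and real entity unification in PolyAnalyst uses far more robust matching.

    from difflib import SequenceMatcher

    records = ["Acme Corporation", "ACME Corp.", "Acme Corp", "Apex Industries"]

    def similar(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    # Greedily group records that look like the same underlying entity.
    profiles = []
    for rec in records:
        for group in profiles:
            if similar(rec, group[0]) > 0.6:
                group.append(rec)
                break
        else:
            profiles.append([rec])
    print(profiles)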
4:25 PM – 5:00 PM
Presented by Elli Bourlai, Computational Linguist / Data Analysis Consultant
This demo showcases how to automate the identification of influential medical researchers using data from public research databases and the Social Network Analysis node.
8:00 AM – 8:50 AM
Presented by Margaret Glide, Computational Linguist / Data Analysis Consultant
How does a machine understand human language? We will explore your favorite text analytics nodes to answer that question.
8:50 AM – 9:40 AM
Presented by Jeffrey Palan, Data Analysis Consultant
Make analysis easier by learning techniques for cleansing your data and maintaining good dictionaries.
9:40 AM – 10:20 AM
Presented by Zhen Li, Data Analysis Consultant
Learn how to extract useful information and gain business insight from survey data using PolyAnalyst. Taxonomy, sentiment analysis, and various visualization tools will be covered briefly.
10:20 AM – 10:30 AM
Please enjoy a quick break, refresh with a beverage and mingle with other attendees.
10:30 AM – 11:05 AM
Presented by Wilson Zhou, Data Analysis Consultant
Learn how to use sentiment analysis to better gauge customer satisfaction about your product and services. This session will show you popular methods for performing sentiment analysis within PolyAnalyst™.
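To make the idea concrete, here is a toy lexicon-based scorer; the word lists are invented, and PolyAnalyst’s sentiment tools rely on much richer linguistic resources than a flat word list.

    POSITIVE = {"great", "love", "excellent", "fast", "helpful"}
    NEGATIVE = {"broken", "slow", "terrible", "rude", "refund"}

    def sentiment_score(text: str) -> int:
        # Positive words add one point, negative words subtract one.
        words = [w.strip(".,!?").lower() for w in text.split()]
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    for review in ["Great product, fast shipping!",
                   "Terrible support, still waiting on my refund."]:
        print(sentiment_score(review), review)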
11:05 AM – 11:40 AM
Presented by Jason Liu, Senior Data Analysis Consultant
A hands-on look at how to create a taxonomy in different ways and utilize the most frequently used features of the Taxonomy node.
11:40 AM – 12:15 PM
Presented by Min Chen, Data Analysis Consultant
This session will pick up from the foundation laid in the previous session (Part 1) by teaching how to correctly classify documents in an efficient manner using advanced PDL queries.
12:15 PM – 1:20 PM
Lunch will be provided with a longer break to network with attendees and the Megaputer staff.
1:20 PM – 1:55 PM
Presented by Elli Bourlai, Computational Linguist / Data Analysis Consultant
Learn the basics of automated Entity Extraction in PolyAnalyst with XPDL for extracting results and studying attribute relationships in your data.
1:55 PM – 2:30 PM
Presented by Rebecca Hale, Lead Computational Linguist
Learn how to use the Entity Extraction node to validate your results, generate a new dataset for them, and quickly test and modify your rules in the Properties view.
2:30 PM – 3:05 PM
Presented by Wilson Zhou, Data Analysis Consultant
Check out the new Fact Extraction node and explore connections within the results of the Entity Extraction node.
3:05 PM – 3:15 PM
Please enjoy a quick break, refresh with a beverage and mingle with other attendees.
3:15 PM – 3:50 PM
Presented by Jeffrey Palan, Data Analysis Consultant
An in-depth look at loading different types of data into PolyAnalyst, as well as how to deal with data that is in multiple languages.
3:50 PM – 4:25 PM
Presented by Zhen Li, Data Analysis Consultant
This session will review tips and tricks on how we can use PolyAnalyst more efficiently and effectively. Open discussion is encouraged to talk through useful workarounds and detail best practices for reducing processing time and maintenance.
4:25 PM – 5:00 PM
Presented by Chris Farris, Data Analysis Consultant
What does a complicated PolyAnalyst project look like? Join us in this demo to see the inner workings of an important text analysis solution in the insurance industry and piece together the various components you have learned about, and will continue to learn about, during the conference.
8:00 AM – 8:35 AM
Welcome to the 2018 Megaputer Analytics Conference
8:35 AM – 9:10 AM
Presented by Jeffrey Palan, Data Analysis Consultant
PolyAnalyst is always adding new functions and features. Get an overview of some of the more important improvements in this presentation.
9:10 AM – 9:45 AM
Presented by Sergei Ananyan, CEO
Megaputer is excited to continue unveiling new features of PolyAnalyst’s new Web Report Platform. This session will demonstrate the features and interface of the new Web Report Platform as well as cover what’s coming in the next version of PolyAnalyst.
BREAKOUT SESSIONS
TRACK 1 BUSINESS
TRACK 2 DEMOS
TRACK 3 TECHNICAL
10:05 AM – 10:40 AM
Presented by Margaret Glide, Computational Linguist / Data Analysis Consultant
Tell the story behind your information extractions: make real world connections between your Entity Extraction results and explore the complex web of information relationships with the new Fact Extraction node.
Presented by Brian Howard, Sales Manager
Effective and comprehensive customer survey analysis requires the analysis of open-ended text responses. For most companies, like the global electronics distributor we will be using as an example, sentiment analysis is a focal point for understanding their customers. Join me to see a demonstration of a text analysis project and web report showcasing techniques for building custom dictionaries, extracting key topics from data-driven analysis, and visualizing and tracking customer satisfaction in multiple ways.
Presented by Rebecca Hale, Lead Computational Linguist
This presentation goes beyond the basics of Text Analytics by taking an in-depth look at Computational Linguistics, the bones of text analysis. Attendees will learn how understanding Computational Linguistics increases the accuracy of analysis, creating richer data and leading to better business discoveries.
10:40 AM – 11:15 AM
Presented by Jason Liu, Senior Data Analysis Consultant
This presentation illustrates how a news analysis solution helps a major bank to monitor its customers and investees.
Presented by Kathryn Verhoeven, Data Analysis Consultant
A wealth of valuable information is hidden in the unstructured text data of customer surveys. This session will demonstrate various data-driven analysis techniques that can be used to improve a survey analysis project.
Presented by Sergei Ananyan, CEO
Most techniques for relating textual information rely on intellectually created links such as chosen keywords, authority indexing terms, or bibliographic citations. Similarity between semantic content of whole documents offers an attractive alternative. Latent semantic analysis provides an effective dimension reduction method for the purpose of topic modeling.
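A minimal scikit-learn sketch of latent semantic analysis is shown below: TF-IDF vectors are reduced with a truncated SVD so that documents sharing a topic land near each other. The four sample sentences are invented for illustration.

    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "The bank approved the loan for the new branch.",
        "Loan rates at the bank will rise next quarter.",
        "The river bank flooded after heavy rain.",
        "Heavy rain caused flooding along the river.",
    ]

    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)  # latent topics
    print(cosine_similarity(lsa).round(2))  # documents on the same topic score higher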
11:15 AM – 11:50 AM
Speaker Panel / Round Table
Guest speakers Jonathan Frey and Eric Su will discuss the applications and difficulties of advanced analytics.
Presented by Bipin Inamdar, Senior Data Analysis Consultant
Learn how to build taxonomies efficiently for gaining meaningful insights into natural-language feedback from your customers.
Presented by Min Chen, Data Analysis Consultant
Learning how to proficiently use PDL, PolyAnalyst’s proprietary search query syntax, will greatly help users harness the power of PolyAnalyst’s text mining capabilities. This session will highlight techniques for creating better search queries in PolyAnalyst.
1:10 PM – 1:45 PM
Presented by Elli Bourlai, Computational Linguist / Data Analysis Consultant
This presentation discusses the benefits and challenges associated with sentiment analysis and presents the new domain-specific Sentiment Analysis solution of PolyAnalyst.
Presented by Wilson Zhou, Data Analysis Consultant
Learn how to build informative web reports that explore the results of a voice-of-customer survey. You will learn about the new web report features and how they are powered within PolyAnalyst™.
Presented by Yi Wang, Senior Data Analysis Consultant
This presentation will review a variety of PDL functions with parameter settings for building complex and advanced queries.
1:45 PM – 2:20 PM
Presented by Kathryn Verhoeven, Data Analysis Consultant
Social media use continues to grow, with consumers increasingly interacting directly with companies about their products and services. This presentation will highlight the use of data and text analysis to monitor Facebook, Twitter, and social news sources and how companies can gain useful insights.
Presented by Bipin Inamdar, Senior Data Analysis Consultant
A case study of how PolyAnalyst is used to gain operational insights from periodic and exit interviews with the employees of a major US healthcare provider.
Presented by Min Chen, Data Analysis Consultant
This demo will cover how different dictionaries including ontologies are maintained and used in PolyAnalyst.
2:20 PM – 2:55 PM
Jonathan Frey, Co-Founder and Principal Consultant, Peninsula Business Intelligence
The importance of great customer service to building a strong business has prompted increased interest in applying text analytics to customer data. To demonstrate the power of data and text mining techniques, we present two case studies outlining the use of advanced text analytics for the analysis of Voice of Customer (VoC) data collected from both external and internal customers of Taco Bell Corporation. External VoC data was collected through several channels over 3 years and analyzed using text mining techniques. Using historical data, a detailed taxonomy of keywords that typically occur in customer comments was developed to characterize these comments into meaningful categories, with sentiment scoring rules applied to the keywords for further classification. Over 2,000,000 customer contacts were analyzed, and the findings were correlated with the structured data collected on the surveys, providing key insights on product, service, and facility topics. The impact on overall satisfaction was measured for each topic area, providing focus for the operation of the restaurant. A second case study highlights how data and text mining techniques provided actionable insights, allowing the internal Information Technology support service desk to implement changes that improve call answering times and reduce the impact of these issues on sales transaction revenue.
3:15 PM – 3:50 PM
Presented by Bipin Inamdar, Senior Data Analysis Consultant
Warranty claims provide a window into how your product or service is working for the customer. PolyAnalyst makes consuming warranty claims data – which is often textual, technical and messy – easy.
Presented by Margaret Glide, Computational Linguist / Data Analysis Consultant
How do customers feel about your products or services? This session shows you three methods to effectively perform Sentiment Analysis within PolyAnalyst™.
Presented by Rebecca Hale, Lead Computational Linguist
This presentation will explore the need for XPDL to extract information, rather than only classifying it with PDL, and discuss different types of entities and relationships that can be designed for domain-specific knowledge targeting in text analysis systems.
3:50 PM – 4:25 PM
Presented by Zhen Li, Data Analysis Consultant
We will walk you through the process of analyzing automobile repair notes in PolyAnalyst using a demo project. Topics include entity extraction from the notes, significant issues across different car models, and detection of anomalous dealer behavior.
Presented by Min Chen, Data Analysis Consultant
This demo will go over an actual social media analysis project on product review data. The main focus will be how to efficiently create a taxonomy according to an existing categorization structure and adjust queries built from poor-quality training labels.
Presented by Margaret Glide, Computational Linguist / Data Analysis Consultant
How can we use XPDL to give us the most accurate business insights? In this presentation, we’ll see how we can use the default entities and some simple XPDL principles to address your specific business needs.
4:25 PM – 5:00 PM
Presented by Nobuyuki Fukawa, Associate Professor of Marketing
As visual marketing gains a more critical role in marketing communications, consumer eye-tracking data has been utilized to assess the effectiveness of those marketing efforts. With eye-tracking data, researchers can capture consumers’ visual attention effectively and may predict their behavior better than with traditional memory measures. However, due to the complexity of the data (its volume, velocity, and variety, known as the 3Vs of Big Data), marketers have been slow to fully utilize eye-tracking data. These data properties may pose a challenge for researchers to analyze eye-tracking data, especially gaze sequence data, with traditional statistical approaches. Commonly, researchers may analyze gaze sequences by computing average probabilities of gaze transitions from a particular area of interest to another area of interest. When the variance of gaze sequence data in the sample is small, this method would uncover a meaningful “global” trend, a trend consistent across all the individuals. However, when the variance is large, this method may not enable researchers to understand the nature of the variance, or the “messiness” of data. In this presentation, first, to overcome this challenge, we propose an innovative method of analyzing gaze sequence data. Our proposed method enables researchers to reveal a “local” trend, a trend shared by only some individuals in the sample. Second, we illustrate the benefits of our method through analyzing gaze sequence data collected in an advertising study. Finally, we discuss the implications of our proposed method.
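The “average transition probability” baseline mentioned in the abstract above reduces to simple counting; the short sketch below computes it over a few invented gaze sequences across areas of interest (AOIs).

    from collections import Counter

    # Each sequence is one viewer's path through areas of interest in an advertisement.
    sequences = [
        ["logo", "product", "price", "product"],
        ["product", "price", "logo"],
        ["logo", "product", "product", "price"],
    ]

    transitions, outgoing = Counter(), Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            transitions[(a, b)] += 1
            outgoing[a] += 1

    for (a, b), count in sorted(transitions.items()):
        print(f"P({b} | {a}) = {count / outgoing[a]:.2f}")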
Presented by Jason Liu, Senior Data Analysis Consultant
This demo shows how a production solution solves a specific task and how the key elements of that solution are created.
Presented by Di Cao, Data Analysis Consultant
Learn how to use XPDL to efficiently define and extract patterns between parts and issues.
BREAKOUT SESSIONS
TRACK 1 BUSINESS
TRACK 2 BUSINESS
TRACK 3 TECHNICAL
8:00 AM – 8:35 AM
Presented by Kathryn Verhoeven, Data Analysis Consultant
Automated text analysis systems enable businesses to efficiently monitor the competitive landscape. By performing in-depth analysis of mainstream and industry-specific news sources, businesses can monitor the activities of competitors, reveal emerging customer needs, and identify new market trends and promising technologies. This presentation will highlight the key tasks, challenges, and results provided by an automated competitive intelligence solution.
Presented by Chris Farris, Data Analysis Consultant
Information is often locked behind unstructured text and needs to be extracted. In this presentation learn how advanced text analysis techniques allow us to extract complicated and extremely sensitive information from claims notes for later use.
Presented by Ruonan Liu, Data Analysis Consultant
Data cleansing and data management are the processes by which we clean and reorganize our data, and they are the foundation for generating more reliable results. We will cover duplicate removal, typo correction, abbreviation expansion, and data integration in this session.
8:35 AM – 9:10 AM
Presented by Kathryn Verhoeven, Data Analysis Consultant
Automated text analysis systems enable businesses to efficiently monitor the competitive landscape. By performing in-depth analysis of mainstream and industry-specific news sources, businesses can monitor the activities of competitors, reveal emerging customer needs, and identify new market trends and promising technologies. This presentation will highlight the key tasks, challenges, and results provided by an automated competitive intelligence solution.
Presented by Chris Farris, Data Analysis Consultant
You have seen an overview of how information is extracted – now take a dive into some complications. Parties can be referenced in many different ways. Learn how to unify these references together to solve a complicated text analysis problem.
Presented by Min Chen, Data Analysis Consultant
This presentation will go over different machine learning tools in PolyAnalyst.
9:10 AM – 9:45 AM
Presented by Yi Wang, Senior Data Analysis Consultant
We will walk through a case project on customer support data for a pharmaceutical company and discuss several ways of exploring the data to gain insights.
Presented by Jeff Palan, Data Analysis Consultant
A look at how text analytics can improve subrogation prediction analysis.
Presented by Owen Shi, Lead HCI/UX Designer
Learn about the current capabilities, new design, and future strategy of PolyAnalyst™’s data visualization tools.
10:05 AM – 10:40 AM
Presented by Jason Liu, Senior Data Analysis Consultant
This presentation covers the pharmaceutical ontologies and public data sources available in PolyAnalyst.
Presented by Chris Farris, Data Analysis Consultant
Be exposed to how information is extracted inside of PolyAnalyst. See for yourself how this intricate text analysis is performed within the system and learn how you can generalize the techniques for your own needs.
Presented by Bingqing Huang, HCI/UX Designer
This demo showcases the bubble chart, statistical widget, and GIS map in Web Report. It includes a brief presentation on the benefits of these visualizations, with examples, followed by a live demo of how to create them in PolyAnalyst.
10:40 AM – 11:15 AM
Presented by Yi Wang, Senior Data Analysis Consultant
How does PolyAnalyst handle adverse event case reports to extract adverse events and map them to the proper MedDRA dictionary Preferred Terms?
Presented by Ruonan Liu, Data Analysis Consultant
Fraud in P&C insurance accounts for about 10 percent of payouts, around 34 billion dollars annually. Fraud cuts profits for insurers, limits their ability to offer competitive premiums to their customers, and worsens their loss and combined ratios. Policyholders also suffer through higher premiums. By reducing fraudulent claims, insurance companies can see a significant increase in their profits.
Presented by Min Chen, Data Analysis Consultant
Structured data analysis involves the development of statistical models based on numbers, dates, and categories. This session will explore the different capabilities PolyAnalyst has in this area, such as finding associations, numerical modeling, anomaly detection, and time series analysis.
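As one concrete (and assumed) illustration of the anomaly-detection item in that list, the scikit-learn sketch below flags unusual records in synthetic numeric data; the analogous work in PolyAnalyst is done through its structured-analysis nodes.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    typical = rng.normal(loc=100, scale=10, size=(200, 2))   # ordinary records
    unusual = np.array([[400.0, 5.0], [10.0, 300.0]])        # two suspicious records
    X = np.vstack([typical, unusual])

    flags = IsolationForest(contamination=0.01, random_state=0).fit_predict(X)
    print("rows flagged as anomalies:", np.where(flags == -1)[0])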
11:15 AM – 11:50 AM
Presented by Jason Liu, Senior Data Analysis Consultant
This demo shows exactly how we can classify adverse events to MedDRA.
Presented by Brian Howard, Sales & Marketing Manager
Automated data analysis systems have become a valuable fraud detection tool for healthcare insurance companies, particularly in detecting anomalies and suspicious activity that is often missed through manual analysis. This presentation will highlight a study involving the implementation of data and text analysis for healthcare insurance fraud detection.
Presented by Di Cao, Data Analysis Consultant
This presentation talks about finding tables in different kinds of documents and how we use fuzzy logic to extract information and relations from those tables.
1:10 PM – 1:45 PM
Presented by Di Cao, Data Analysis Consultant
A delicate medical device may have thousands of parts, and finding those parts and their corresponding issues is a great challenge. This presentation will show you how we created a semi-automated analysis process that can extract relations between parts and issues.
Presented by Min Chen, Data Analysis Consultant
Healthcare professionals benefit by having a system that automatically extracts clinical findings, but a system that goes one step further, and interprets clinical findings, is indispensable. This presentation will describe such a system.
Presented by Elli Bourlai, Computational Linguist / Data Analysis Consultant
Analyzing textual data in multiple languages is something few software systems have mastered. This session presents the multilingual capabilities of PolyAnalyst at each stage of a project workflow.
1:45 PM – 2:20 PM
Presented by Eric Su, Consultant, Eli Lilly & Co.
Millions of documents from acquired companies are unclassified. Classifying these documents is desirable for browsing and searching. Presented here are techniques and challenges of rule- and machine learning-based classification using labeled internal documents as training data.
Presented by Jeff Palan, Data Analysis Consultant
An in-depth look at medical coding, EMRs, and the challenges and potential solutions available for automation.
Presented by Bipin Inamdar, Senior Data Analysis Consultant
What comes after PolyAnalyst identifies issues to resolve? A case management system allows you to stay on top of the resolution process.
2:20 PM – 2:55 PM
Presented by Zhen Li, Data Analysis Consultant
We will walk you through the process of medical device support data analysis in PolyAnalyst using a demo project. The focus will be on extracting key information from the avalanche of text data using entity extraction techniques.
Presented by Yi Wang, Senior Data Analysis Consultant
This demo will show several basic ways that PolyAnalyst automatically or manually anonymizes personal information such as people’s names, email addresses, etc.
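A minimal rule-based sketch of the anonymization demo described above is shown below, covering only emails and phone numbers; person names generally need dictionaries or entity extraction rather than a single regular expression, and the patterns here are illustrative assumptions.

    import re

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

    def anonymize(text: str) -> str:
        # Replace matched personal identifiers with neutral placeholders.
        text = EMAIL.sub("[EMAIL]", text)
        return PHONE.sub("[PHONE]", text)

    print(anonymize("Contact John at john.doe@example.com or 812-555-0142."))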
Presented by Bipin Inamdar, Senior Data Analysis Consultant
Automate PolyAnalyst and make it a part of your wider software ecosystem.
3:15 PM – 3:50 PM
Presented by Kathryn Verhoeven, Data Analysis Consultant
The rate of new research publications in any given field continues to increase, making it difficult for researchers to stay updated on current and potential research areas of interest. This presentation will examine how automated text analysis techniques can help researchers obtain meaningful literature summaries, perform knowledge discovery, and identify trends.
Presented by Di Cao, Data Analysis Consultant
Learn how to build an automated analysis process to help you extract useful insights from employee and customer surveys with PolyAnalyst’s new features.
Presented by Pavel Anaschenko, Chief Technology Officer
This presentation will discuss how data and text analysis systems can adapt to handle very large data sets, and provide an overview of best practices for performing text analysis on a BIG scale.
3:50 PM – 4:25 PM
Presented by Yi Wang, Senior Data Analysis Consultant
We will talk through how PolyAnalyst loads, cleanses, and indexes standard resource data, and how to find useful patterns in the pre-indexed data.
Presented by Kathryn Verhoeven, Data Analysis Consultant
Public research resources and databases such as PubMed offer valuable and up-to-date information on the state of medical research. This presentation will show how automated text analysis can be applied to gain insights from medical research data.
Presented by Owen Shi, Lead HCI/UX Designer
Learn how the Megaputer design team provides human-centered design solutions to satisfy user needs.
4:25 PM – 5:00 PM
Presented by Sergei Ananyan, CEO
Find out where Megaputer is focusing its efforts in the ever changing world of data and text analytics.
8:00 AM – 8:35 AM
Presented by Rebecca Hale, Lead Computational Linguist
Learn to utilize advanced PDL search query techniques for finding more precise patterns. We will cover tips for taking your PDL queries to the next level to build a foundation for writing good XPDL.
8:35 AM – 9:10 AM
Presented by Margaret Glide, Computational Linguist / Data Analysis Consultant
Learn XPDL syntax basics to extract results and study attribute relationships within your data.
9:10 AM – 9:45 AM
Presented by Di Cao, Data Analysis Consultant
We will work through the ins and outs of the Entity Extraction node during this session. Discover new features, rule-writing shortcuts, and debugging tools to help you save time and write more effective rules.
9:45 AM – 10:05 AM
Please enjoy a quick break, refresh with a beverage and mingle with other attendees.
10:05 AM – 10:40 AM
Presented by Rebecca Hale, Lead Computational Linguist
Speed up your Entity Extraction node with child rules! Now that you’ve learned how to write single XPDL rules for quick and easy entity extraction, learn how to effectively utilize child rules and bookmark queries with named groups to get those entities extracted in a fraction of the time.
10:40 AM – 11:15 AM
Presented by Elli Bourlai, Computational Linguist / Data Analysis Consultant
How can we filter our results or exclude unwanted results using XPDL? This workshop introduces the filter and exception rules through simple tasks.
11:15 AM – 11:50 AM
Presented by Di Cao, Data Analysis Consultant
Developing custom dictionaries can take time. In this presentation, we will demonstrate how you can use XPDL to build custom dictionaries more quickly, so you can spend more time on your analysis and results.
11:50 AM – 1:10 PM
Lunch will be provided with a longer break to network with attendees and the Megaputer staff.
1:10 PM – 1:45 PM
Presented by Margaret Glide, Computational Linguist / Data Analysis Consultant
Modify, normalize, and format your results with XPDL Output functions to make the most of them.
1:45 PM – 2:20 PM
Presented by Zhen Li, Data Analysis Consultant
Postprocessors provide comprehensive tools for further processing the results of your XPDL rules, including case or form normalization, synonym merging, semantic links, trimming, string replacement, and more. In this session, we will walk you through each type of postprocessor with real examples and discuss when and how to use each of them. Perfect your entity extraction results with the postprocessors of XPDL!
2:20 PM – 2:55 PM
Presented by Elli Bourlai, Computational Linguist / Data Analysis Consultant
Learn how to resolve overlapping results and improve accuracy in the Entity Extraction node.
2:55 PM – 3:15 PM
Please enjoy a quick break, refresh with a beverage and mingle with other attendees.
3:15 PM – 3:50 PM
Presented by Margaret Glide, Computational Linguist / Data Analysis Consultant
Condense frequent query templates with basic macros to make them more readable, reusable, and easy to edit.
3:50 PM – 4:25 PM
Presented by Rebecca Hale, Lead Computational Linguist
Store frequent XPDL rule patterns with advanced XPDL macros to save editing time and keep your rules tidy.
4:25 PM – 5:00 PM
Presented by Sergei Ananyan, CEO
By addressing the challenges and presenting new ideas and implementation areas, this session will uncover the underlying value of a powerful entity unification feature.