
Blog Info
Blog name: IDMer (数据挖掘者, "Data Miner")
Total posts: 175
Comments: 848
Guestbook entries: 119
Visits: 2,507,739
Established: June 24, 2005

Announcement
The "数据挖掘者" (Data Miner) blog has moved. New address: http://idmer.blog.sohu.com
Sina Weibo: @张磊IDMer
M2005, the 8th annual Data Mining Technology Conference
Posted by 数据挖掘者 (IDMer) on 2005/10/24 16:34:23
Source: http://www.sas.com/events/dmconf/index.html

M2005, the largest data mining conference in the world, kicks off in Las Vegas today (October 24, 2005). Attendees at M2005, held in Las Vegas Oct. 24-25, will have a forum for exchanging ideas with hundreds of data mining practitioners and nearly 40 of the most respected data mining experts in the world. The conference will feature keynote addresses and session talks introducing not only new theoretical concepts and models but also the latest data mining applications and creative techniques. The following abstracts have been provided:

Shawna S. Ackerman and Roosevelt C. Mosley, Jr., Pinnacle Actuarial Resources
Use of Credit in Personal Insurance
For financial institutions, credit scores and scoring mechanisms continue to provide an effective means to identify markets and assess risk. In the insurance industry the analogous mechanism, insurance scoring, is often challenged by regulatory constraints. In this session two property and casualty actuaries will discuss the methods, results and challenges of using credit in three distinct regulatory environments: an unconstrained market, a limited-use market and a prohibited-use market.

Pete Affeld, Sprint; Brij Masand, Data Miners
Use of Survival Analysis in Telecommunications
This presentation will discuss the availability and issues of data; flexible hazard versus empirical hazard; and scoring of the hazards to forecast churn scores.

Denise L. Best, Stacy Yehle, and Judi Field, Hallmark
One-to-One Marketing at Hallmark Cards, Inc.
Determining a relevant message to send to the right consumer at the right time is the essence of the one-to-one marketing challenge at Hallmark Cards ... as it is with any company attempting to develop a meaningful, lasting and profitable relationship with its customer. Data mining techniques are essential to finding the right answers to "Who, What and When?"
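The "empirical hazard" in the Sprint/Data Miners survival-analysis talk above can be pictured with a toy cohort (all numbers made up): the hazard at tenure t is the fraction of customers who survived to month t and then churned in it, and scoring a customer amounts to reading off the hazard at their tenure.

```python
# Toy empirical-hazard sketch (made-up data) for survival-based churn scoring:
# hazard(t) = customers churning at tenure t / customers still at risk at t.
churn_tenure = {1: 10, 2: 5, 3: 5}   # customers churning in each tenure month
total_customers = 100                # cohort size; the rest are still active

hazard = {}
at_risk = total_customers
for t in sorted(churn_tenure):
    hazard[t] = churn_tenure[t] / at_risk   # conditional churn probability at t
    at_risk -= churn_tenure[t]              # survivors move on to the next month

print(hazard)  # month-1 hazard is 10/100; later months condition on survival
```

The "flexible" alternative the talk contrasts with this would smooth or model these per-month rates rather than using raw counts.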
This presentation will address Hallmark's use of data mining techniques and its approach to developing predictive models, including sample selection, gathering and deriving variables, and back-end analysis. The real power is achieved by effectively integrating the various predictive models with marketing knowledge and constraints. The final goal is to implement an effective overall contact strategy with Hallmark's Gold Crown Card consumers.

Alexander Black, Computer Sciences Corporation
Customer Intelligence: Growing Your Business by Getting the Most Out of Your Customer Relationships - CSC's 2005 Customer Intelligence Survey Findings - How Are Companies Responding to the Demand for More Meaningful Customer Insights
There are three dimensions of Customer Intelligence:
- Customer Information Integration: the ability to compile a 360-degree view of the customer and provide access to that information across the enterprise
- Customer Insights (Segmentation and Modeling): the ability to perform descriptive and predictive analytics on customer information
- Customer Insights Operationalization: the ability to act on customer insights through sales, service and marketing channels
CSC's Customer Intelligence Maturity Model examines practices in each of these three dimensions and ranks capabilities from Basic through Distinctive, allowing organizations to assess their ranking and target Advanced and Distinctive capabilities. In our 2005 Customer Intelligence Survey, we interviewed global executives of Fortune 500 firms to determine how companies are refining customer insights and acting on them. In particular, we focused our questions on segmentation schemas, analytics organization models, and tool selection and preferences. Our analysis highlights the importance of business models in the analysis process, as well as the need to focus on the end customer for value-added information.
In this session we will review the Customer Intelligence Maturity Model, then highlight our 2005 survey findings and relate them back to the model.

Tom Bohannon, Baylor University
Data Mining Applications at Baylor University
The purpose of this presentation is to illustrate how data mining techniques can be applied in higher education. Applications in all stages of enrollment management will be explained and illustrated. Questions addressed will be: Who are the most likely to enroll? Who are the most likely to be retained? Who are the most likely to graduate? Applications in the area of fund raising will also be presented. Questions addressed will be: Which donors will likely participate in the current campaign? Which donors will likely increase their annual donations? Which donors will likely decrease their annual donations? Which non-donors are likely to become donors?

Thomas J. Bradshaw, Bank of America
A Statistical (and yet Non-Traditional) Approach to the Design, Optimization, and Analysis of Matched Market Testing Using Linear Models and Time Series Behavioral Data
The analysis of mass media advertising and similar market-level activities is often based on a pre/post comparison of two matched markets. Experience at Bank of America has demonstrated that, because the market selection process has historically not been statistically and behaviorally based, the results have frequently been inconclusive or misleading. This presentation describes a statistically based approach we have developed that uses behavior-based time series data and SAS linear model procedures to select the best test and control markets from a large pool of candidates; evaluate and optimize their discriminative sensitivity prior to the test; and analyze the post-test results. The methodology is demonstrated using data from an actual matched market test. Examples of SAS code are provided.
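One step in matched-market testing is choosing a control market whose pre-period behavior tracks the test market. The talk above does this with SAS linear model procedures on time-series data; the toy Python sketch below (all data and market names invented) just ranks candidate controls by Pearson correlation with the test market's pre-period series.

```python
# Hypothetical sketch of control-market selection for a matched-market test:
# pick the candidate whose pre-period series co-moves most with the test market.
def pearson(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

test_market = [100, 104, 99, 110, 115, 112]       # pre-period weekly metric (made up)
candidates = {
    "market_a": [55, 57, 54, 60, 63, 61],         # moves with the test market
    "market_b": [200, 180, 210, 170, 160, 190],   # moves against it
}

# Rank candidate control markets by pre-period co-movement.
ranked = sorted(candidates, key=lambda m: pearson(test_market, candidates[m]), reverse=True)
best_control = ranked[0]
print(best_control)  # market_a tracks the test market best
```

A real implementation would also check level and seasonality matches and validate discriminative sensitivity before launching the test, as the abstract describes.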
James Cappel, Central Michigan University
A Survey of Practices and Opinions about Business Intelligence
Many companies are still struggling with how best to position business intelligence within their organization and optimize their investments in data mining and BI. This study was conducted on behalf of the Central Michigan University Research Center and sponsored by The Dow Chemical Company, Ford Motor Company, and Eli Lilly and Company. The investigation involved a web-based survey of hundreds of business professionals from large organizations. The survey addressed issues such as the scope, structure, resources, drivers, and evaluation of BI practices within companies, as well as opinions about the perceived role, effectiveness, and importance of BI to companies. The results provide a baseline for organizations to assess their progress along the BI continuum, and they raise thoughts about how companies may potentially improve their BI practices.

Manoj Chari, SAS
Offer Assignment with SAS Marketing Optimization
Data mining models are often used to estimate profitability measures or response rates for marketing offers. SAS Marketing Optimization addresses a related marketing decision problem: given a set of marketing offers, which offer(s) should each customer from a set of prospective customers receive, so as to optimize some aggregate measure of value while satisfying relevant business constraints, customer eligibility rules, and contact policy rules? SAS Marketing Optimization translates this offer assignment problem into a mathematical optimization problem and solves it to determine an explicit assignment of offers to customers that achieves this goal. It produces reports that facilitate analysis of how business constraints affect the optimal assignment and objective. The optimization problems that arise in this context are typically extremely large and are intractable for general-purpose optimization solvers.
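The offer-assignment problem can be pictured with a toy example. The sketch below (customers, offers, and values all invented) uses a greedy heuristic: assign the most valuable customer-offer pairs first, subject to a per-offer capacity. This is only an illustration of the problem shape; as the abstract notes, the product solves it as one large mathematical optimization rather than greedily.

```python
# Toy offer assignment: each customer gets at most one offer, each offer has
# a capacity, and we want high total expected value (greedy approximation).
expected_value = {                       # predicted value from data mining models
    ("alice", "gold_card"): 120.0, ("alice", "cashback"): 80.0,
    ("bob",   "gold_card"): 110.0, ("bob",   "cashback"): 95.0,
}
capacity = {"gold_card": 1, "cashback": 2}   # business constraint: offer budgets

assignment = {}
# Walk customer-offer pairs from most to least valuable.
for (cust, offer), _ in sorted(expected_value.items(), key=lambda kv: -kv[1]):
    if cust not in assignment and capacity[offer] > 0:
        assignment[cust] = offer
        capacity[offer] -= 1

print(assignment)  # alice takes gold_card; bob falls back to cashback
```

Greedy assignment can miss the global optimum once constraints interact, which is exactly why the abstract frames this as a large-scale optimization problem.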
SAS Marketing Optimization uses specialized large-scale optimization and approximation techniques to solve such problems to near-optimality. Implementations of this more rigorous approach have resulted in significant ROI increases over traditional rule-based and ad hoc approaches to offer assignment. The use of optimization can greatly enhance the value and effectiveness of campaigns for organizations that use data mining models to estimate return or value at the customer or account level.

Jay Coleman, University of North Florida; Allen Lynch, Mercer University; Mike DuMond, ERS Group
Prob(it)ing the NCAA: Three Applications in College Sports
The NCAA Basketball Tournament selection committee annually selects the Division I men's teams that should receive at-large bids to the national championship tournament. Although its deliberations are shrouded in secrecy, the committee is provided a litany of objective and readily estimated team-performance statistics to aid in its decisions. Using a probit analysis on this objective team data and the committee's decisions from 1994 through 1999, we developed an equation we call the "Dance Card" as a model of the selection committee's decision rule. We have used the Dance Card to predict at-large tournament bids with 94% accuracy over the six years since. We performed another probit analysis on the 256 tournament game results from 2001 through 2004 to develop a second equation, the "Score Card," to identify and weight the team-performance factors related to which team actually wins tournament games. The Score Card was used to make predictions for 2005, during which it was 72% accurate over all tournament games, perfect on all games in the Final Four (including the prediction of UNC over Illinois in the championship game), 67% accurate when it predicted an upset, and rivaled Las Vegas in predicting game winners.
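Both the "Dance Card" and the "Score Card" are probit models. In standard notation (mine, not the authors'), the modeled probability that team i receives a bid (or wins a game), given its vector x_i of team-performance statistics, is:

```latex
% Probit model: Phi is the standard normal CDF; beta is fit by maximum likelihood.
P(y_i = 1 \mid x_i) = \Phi\!\left(x_i^{\top}\beta\right),
\qquad
\Phi(z) = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}}\, e^{-t^{2}/2}\, dt
```

Prediction then amounts to comparing the fitted probabilities, e.g. picking the team with the larger estimated win probability.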
College football teams annually replace departing players with new players from high schools through a process known as recruiting. The most highly sought recruits may choose from among dozens of high-profile schools, but even less-heralded high school players often consider four or five schools during the recruiting process. We constructed a probit model to predict the college destination of high school football players and to gauge the relative importance of factors such as distance, on-field success (ranking and championships), playing-time potential, NCAA penalties, and whether the school is a member of a "BCS" conference. While the factors that ultimately affect the choice of college are more numerous and certainly less quantifiable than those the NCAA selection committee uses when selecting teams for the basketball tournament, our models nevertheless successfully predicted the college selection of 61% of recruits from 2002 to 2004. We also successfully predicted the college destination of 65% of the top 100 recruits of 2005. The models may also provide useful insights into the effects on competitive balance arising from NCAA sanctions and the BCS bowl game system.

Randy Collica, Hewlett-Packard
Effects of Missing Data in Data Mining
Anyone who has performed data analysis in preparation for data mining has eventually come across the issue of missing data. Missing data arises through many different mechanisms: survey respondents drop out and answer only a portion of a survey, company information for customer analysis is unavailable or cannot be matched, experimental methods for collecting gene expression data sometimes cause systematic missing values, and similar problems occur in other business and scientific applications. This presentation will look at the effect of missing data on certain data mining analyses and how imputation can be used as an alternative.
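The basic trade-off the missing-data talk explores can be shown with a toy table (all values invented): dropping incomplete records discards information, while even a crude mean imputation keeps every record. The talk itself compares much richer methods (cluster-seed imputation, the Enterprise Miner imputation node, Proc MI); this sketch only illustrates the simplest baseline.

```python
# Toy illustration of complete-case analysis vs. simple mean imputation.
records = [
    {"age": 30,   "spend": 200.0},
    {"age": None, "spend": 150.0},   # age missing
    {"age": 50,   "spend": None},    # spend missing
    {"age": 40,   "spend": 250.0},
]

def mean_impute(rows, field):
    """Replace missing values of `field` with the mean of the observed ones."""
    observed = [r[field] for r in rows if r[field] is not None]
    fill = sum(observed) / len(observed)
    return [dict(r, **{field: r[field] if r[field] is not None else fill})
            for r in rows]

complete_only = [r for r in records if None not in r.values()]
imputed = mean_impute(mean_impute(records, "age"), "spend")

print(len(complete_only), len(imputed))  # 2 vs 4: imputation keeps all records
```

Mean imputation also shrinks variance and can distort clusters, which is one reason the talk considers multiple imputation (Proc MI) instead.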
SAS Enterprise Miner provides a data imputation node, and Proc MI is a SAS/STAT procedure for creating multiple imputations (MI). I will look at clustering and its behavior when some data elements are missing, comparing the techniques of imputing seeds of the nearest cluster, the Enterprise Miner imputation node, and Proc MI, and discussing when each technique is best suited for your data mining project. A brief result of an internal imputation study demonstrates the value of statistically sound and plausible results.

Jim Cox, SAS
Mining Customer Feedback: How to Get a Competitive Edge by Understanding What Your Customers Are Telling You
Your customers constantly give you valuable information that can help you understand them better: through call centers, surveys, Web sites and chat rooms where they can post their complaints, and direct email correspondence. Sifting through and understanding this data is essential to gaining competitive advantage. Using some example data, this workshop will help you understand how to use Text Miner/Enterprise Miner to analyze this data and gain valuable insights.

Nebahat Dönmez, TURKCELL
Predictive Models to Prevent Prepaid and Postpaid Churn
Churn, especially prepaid churn, can be very difficult to predict in the telecommunications industry. TURKCELL, as the incumbent with 13.5 million prepaid and 4.7 million postpaid subscribers, faces the following challenges: to decrease the churn rate by developing targeted retention activities; to identify risky subscribers in advance in order to design proactive retention activities; and to understand the reasons for churn. Our Data Mining Department produces monthly churn scores with the help of SAS Enterprise Miner for three predictive churn models, two of which are prepaid. With the help of this project, we were able to reduce the churn rate by 50%. This presentation addresses our approach to building predictive models, including defining churn and gathering and deriving more than 2,000 variables.
We will also discuss the use of the models within the big picture.

Ed Gaffin, Walt Disney World Resort
Data Mining and Business Intelligence: The Foundation for Building Effective Marketing Models for a Highly Segmented Product Set
The challenge: how to transform resort operational data and distribute it as meaningful business information for use by all levels of marketing and financial managers. This presentation demonstrates how the CRM division of Walt Disney World has assembled a SAS intranet solution to deliver valuable guest information to users' desktops across the financial and marketing disciplines. Combining the insight of all stakeholders with the expertise of the modeling and analytics team improves our ability to create a series of predictive models that most efficiently target our resort marketing efforts.

Ken Fritz, Ohio Department of Natural Resources
Mining for Resources - Naturally
The Ohio Department of Natural Resources (ODNR) has used data mining for five years to help understand issues related to declining license sales. In the process we have learned that our business units within the agency share many of the same customers, but from an operational standpoint we had failed to capitalize on this. We also discovered that many of our databases contain data suitable for data mining activities, representing a wealth of potential actionable information. This presentation will focus on the data mining efforts employed by ODNR to better understand its customers for marketing efforts, as well as to provide decision support for program administrators and operations managers.

Johannes Gehrke, Cornell University
Privacy-Preserving Data Mining
The digitization of our daily lives has led to an explosion in the collection of data by governments, corporations, and individuals. Protection of the confidentiality of this data is of utmost importance.
However, knowledge of statistical properties of private data can have significant societal benefit, for example in decisions about the allocation of public funds based on Census data, or in the understanding of drug interactions based on the analysis of medical data from different hospitals. I will start by introducing privacy-preserving data mining and its applications, and I will show how background knowledge can lead to severe privacy breaches. I will then describe how proper modeling of background knowledge can avoid privacy breaches, and describe practical methods for privacy-preserving data analysis.

Paolo Giudici, University of Pavia, and Silvia Figini, Bocconi University
Bayesian Feature Selection for the Estimation of Customer Lifetime Value
In this talk we consider the problem of estimating the lifetime value of customers when a large number of features are present in the data. In order to measure lifetime value we use survival analysis models to estimate customer tenure, and actuarial methods to discount cash flows. One of the most popular dependency models in the analysis of survival data is the Cox model, which is well suited to inferring the effect of covariates on the length of customer tenure. In a real customer lifetime value analysis, many potential explanatory covariates are available and, therefore, a selection is needed. A further choice to be made is the proper classification and transformation of such variables. Our claim is that, when explanatory variables need to be selected and properly designed, stepwise model selection of a Cox proportional hazards model is not the best approach. We propose an alternative methodology for feature selection which allows one to evaluate, in an exploratory fashion, the relative importance of each explanatory covariate and of its alternative classifications.
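For reference, the Cox proportional hazards model discussed above has the standard form (notation mine, not from the talk):

```latex
% Hazard for a customer with covariate vector x: a shared, unspecified baseline
% hazard h_0(t) scaled multiplicatively by the covariate effect exp(x' beta).
h(t \mid x) = h_0(t)\, \exp\!\left(x^{\top}\beta\right)
```

Because the baseline h_0(t) is left unspecified, covariates shift churn risk multiplicatively, which is what makes the model attractive for relating customer features to tenure length.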
This methodology is based on a simple and effective Bayesian approach that allows one to determine the scores of the partitions associated with each covariate configuration and, consequently, to choose among them. A further advantage of the approach is that churn estimates are model-averaged and, therefore, turn out to be less unstable than what would be obtained with a stepwise selection approach. In the talk we will test and compare our approach on a case study, based on data from a television company that aims to predict churn behaviour and to establish appropriate retention actions.

Nita W. Glickman, Purdue University
Development of the Purdue University Companion Animal Surveillance System
The Purdue University Companion Animal Surveillance System was established in 2004, with support from the Centers for Disease Control and Prevention's Center for Infectious Diseases, for the purpose of syndromic surveillance in companion animals to detect early evidence of bioterrorism. It has since been expanded to include early detection of emerging infectious diseases/zoonoses in companion animals before they affect humans, and to study patterns of transmission of infectious agents in nature. Other uses include characterization of human environmental health hazards, such as causes of cancer. The Purdue Companion Animal Surveillance System currently receives data from more than 450 Banfield veterinary hospitals and 15,000 veterinary hospitals that use Antech Laboratories in the US. The data is transmitted to Purdue University, where a data warehouse is maintained. The incoming data is continually mined using commercial software (SAS and ESRI) and programs specifically developed to detect, analyze, graph, map, and disseminate important information to veterinarians and veterinary public health officials. A web portal is currently under development to share information with the public health community in near-real time.
This paper demonstrates the following uses of SAS technology in creating and mining the Purdue Companion Animal Surveillance System database: SAS ETL Studio is used to read in data, load data into SAS data files, merge data with existing files, and extract small data files used for individual projects; SAS Enterprise Miner and SAS Text Miner are used to model risk factors for diseases and environmental exposures; SAS/STAT and GIS-based software (ESRI) are used to identify space-time clusters of acute adverse health events in dogs and cats and to map them to specific geographic locations. A prototype web portal currently being developed in collaboration with COMSYS, using SAS Information Delivery Portal and SAS AppDev Studio, will allow Banfield veterinarians and collaborating public health and security agencies to query the Purdue Companion Animal Surveillance System in near real time to identify unusual patterns of health events and forecast future trends. Specific examples to be presented include tick activity and forecasting, heartworm preventive drug use and safety, rabies vaccination use and safety in dogs and cats, geographic distribution of leptospirosis infection in dogs, and the prototype web portal.

Benton Gup, Mike Hardin, and Michael Conerly, University of Alabama
Business Analytics Applied to Money Laundering Detection
The Patriot Act makes it imperative for financial institutions to proactively monitor potential money laundering activities by their customers. Due to the volatile nature of these activities, the detection schemes must continually be updated. In this presentation, we present a typology of recent money laundering schemes and illustrate the process of developing alert mechanisms using analytical techniques.

Richard Hackathorn, Bolder Technology
Are Small Worlds Lurking in Your Data Warehouse?
This presentation suggests a new approach, Associative Link Analysis, that analyzes the enterprise data warehouse for "small worlds" behavior.
By focusing on the structural relations among key business objects (such as customers or products), actions to promote or inhibit rapid growth in these relationships are suggested. The EDW is rich in its description of business reality; the goal is to leverage this richness to manage the complexity of your business. Presentation outline:
- Overview of network analysis of complex systems
- The aristocratic-egalitarian distinction
- Tipping points and managing chaotic behavior
- From star schema to associative graph
- Graph metrics and design objectives
- Strategies and tactics of small worlds
- Implementation issues

David Hand, Imperial College, London
What You Get Is What You Want? Some Dangers of Black Box Data Mining
Descriptions of data mining tools, and examples of them in action, generally assume that the truth is out there, is fixed and invariant, and can be discovered, at least in principle. The fact is, however, that life is often more complicated than this suggests. Sometimes shaky assumptions cast serious doubt on one's conclusions. Sometimes the exciting knowledge nuggets are irrelevant in the context of the bigger picture. Even worse, sometimes the very data mining exercise itself has consequences which render its own results invalid. Data mining is a powerful technology for profit and progress, but careless use of any powerful technology can have serious adverse consequences.

Glenn Hofmann, HSBC
Marketing Strategies for Retail Customers Based on Predictive Behavior Models
The ubiquitous application of data mining in direct marketing brings both the opportunity and the responsibility to increase profitability while easing the burden of unwanted advertising for consumers. Big retail chains typically have large customer databases, mostly consisting of store credit card holders. Differential treatment of customers is key to modern marketing.
Retailers want to detect people with the potential to be loyal shoppers and build mutually beneficial relationships with them by means of programs the customers consider valuable; these may differ by segment. Likewise, on the other side of the customer spectrum, they want to avoid sending unprofitable "junk mail" to people who will not respond. Using SAS and classification trees, we developed several models to predict customer activity and loyalty levels. Starting with hundreds of demographic and account variables, in addition to all item-level transactions for the past two years, we employed SAS to first create a large number of derived predictor variables. This time-consuming but critical step aims at constructing variables that are both robust and predictive of future behavior. We will present some innovations, such as regionalized and segmentized behavior variables, that can significantly improve demographics-based models. We obtain the final list of predictor variables through a two-step selection process. The final model is a combination of trees grown on bootstrap samples of the training data. We give several examples of implementing the resulting individual scores in both marketing strategy development and customer targeting for mail and loyalty campaigns. The achieved increases in return on investment are truly eye-opening. It is our hope that these types of activity scores become as popular in direct marketing as credit scores have become in financial services.

Kevin Ikeda, SAIC
Helping the Best Get Even Better with Enterprise Miner
As one of the world's largest organizations, the United States Marine Corps faces complex challenges in efficiently and effectively managing a personnel force of over a quarter million with an annual turnover of 40,000. The Marine Corps turned to SAS Enterprise Miner to analyze its 3 TB personnel data warehouse and transform long-standing habits into more effective policies.
With benefits ranging from a measurable 400% ROI to improvements in overarching policies, come learn how the Marine Corps employs SAS data mining so that "the few, the proud" become even better.

Robert Jenkins, Acxiom
Today's business environment moves very fast, and the ability to keep pace makes all the difference in determining a company's success or failure. Companies must react to situations quickly, but more importantly, they must be proactive. To achieve success, companies must be able to make sound decisions based on solid information resulting from scientific processes. In addition, the information on which the decisions are based must be actionable, and fast. Modern-day military practitioners refer to this as getting inside the enemy's decision curve. Fighter pilots, for example, use this philosophy to constantly maneuver for advantage in battle. In business terms, the enemy could be a competitor, the market environment, natural disasters, government regulation, and so on: the goal is making decisions faster than the external force and acting on those decisions before the situation changes. Data mining activities in support of business intelligence best practices are often plagued by an inability to access and link relevant data, complex multi-vendor IT architectures, inadequate supporting software, insufficient services, daunting security challenges, and perpetually insufficient storage. As a result, business intelligence and CRM have frequently failed to deliver on their promises. Most companies are not short on ideas, strategies, or tactics. In fact, most companies have very capable staff who can anticipate and plan. However, a primary limiting factor in a company's success is the ability to pull together the necessary information, analyze it, and implement a solution fast enough. Failing to do so translates into lost opportunity. As Sun Tzu said, "Opportunities multiply as they are seized."
In an effort to "get inside the decision curve" and "seize opportunity," Acxiom has pioneered a technology that will change the way we look at business intelligence. For the first time, hardware, software, services and data will come together in a single grid-enabled platform designed specifically to address the implementation and ongoing support complexities that are common to every BI project. Acxiom's new solution, deployed behind the firewall, creates a citadel of knowledge and unprecedented in-house control over data and insight. By coupling the power of grid technology with advanced BI techniques and processes, Acxiom has unleashed the ultimate data mining weapon to accelerate and optimize decisioning across the enterprise. This session will focus on the vision of a new foundation upon which all future data mining and business intelligence efforts will be built.

Dmitri V. Kuznetsov, Media Planning Group (MPG)
Mediaphysics as Statistical Physics of Mass-Media Effects
A statistical-physics approach to the modeling and analysis of complicated social mass phenomena (marketing, economics and politics) is developed. A keystone of the approach is an analysis of the distribution of a population between two or many alternatives: brands, political affiliations, or opinions. Relative distances between the state of a "person's mind" and the alternatives are measures of the propensity to buy (to affiliate, or to hold a certain opinion). The distribution of the population over those relative distances is time-dependent and is affected by external factors (economic, social, marketing and so on) and internal factors (mean-field influential propagation of opinions, synergy effects, etc.), considered as fields. Specifically, the opinion-influence field can be generalized to incorporate important elements of Ising-spin-based sociophysical models (the Sznajd and other approaches) and kinetic-equation ones. The distributions are described by a Schrödinger-type equation in terms of Green's functions.
The developed approach has been applied to a real media efficiency problem for a large company and generally demonstrated very good results, despite low initial correlations between the factors and the target variable.

Choudur K. Lakshminarayan, HP
Statistics in Text Mining and Web Mining
In this talk I'll present some of our research in the areas of text mining and web mining at HP. I'll discuss the analysis of textual data using statistical clustering and statistical classification with the vector-space model approach. Techniques considered include Gaussian mixture models, Naïve Bayes, and support vector machines. I'll also present applications of bagging and boosting to improve classifier performance. In the second half of the talk, I'll present an analysis of web usage patterns to classify anonymous online visitors as satisfied or unsatisfied customers using first-order Markov chains.

Kim Larsen, Charles Schwab & Co.
Practical Predictive Modeling with the Naïve Bayes Classifier
Predicting the probability of a binary event given a set of independent variables is one of the most common modeling problems, whether in risk management, marketing, or a medical study. One of the oldest and simplest techniques for binary classification is the Naive Bayes Classifier. Although based on the rather optimistic assumption of conditional independence between the predictors, Naive Bayes Classifiers (NBC) often outperform far more sophisticated techniques. In fact, due to its simple structure, the Naive Bayes Classifier is an especially appealing choice when the set of independent variables is large. This is because of the classic bias-variance tradeoff: the NBC tends to produce biased estimated class probabilities, but it is able to save considerable variance around these estimates. Despite these appealing features, the NBC is rarely used today. There are two major reasons for this, both related to the independence assumption.
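The conditional-independence assumption at the heart of the Naive Bayes discussion above can be made concrete with a minimal classifier over binary features (toy data invented for illustration): the predicted class maximizes P(class) times the product of per-feature likelihoods P(feature | class).

```python
# Minimal Naive Bayes sketch: P(class | x) is proportional to
# P(class) * product over features of P(x_i | class).
from collections import Counter
from math import log

# (features, label) pairs: did the customer respond to an offer? Toy data.
train = [
    ((1, 1), "respond"), ((1, 0), "respond"), ((1, 1), "respond"),
    ((0, 0), "ignore"),  ((0, 1), "ignore"),  ((0, 0), "ignore"),
]

labels = Counter(lbl for _, lbl in train)

def feature_prob(feat_idx, value, label):
    """P(feature = value | label), with add-one (Laplace) smoothing."""
    rows = [x for x, lbl in train if lbl == label]
    hits = sum(1 for x in rows if x[feat_idx] == value)
    return (hits + 1) / (len(rows) + 2)

def predict(x):
    """Pick the label maximizing log P(label) + sum_i log P(x_i | label)."""
    def score(label):
        s = log(labels[label] / len(train))
        return s + sum(log(feature_prob(i, v, label)) for i, v in enumerate(x))
    return max(labels, key=score)

print(predict((1, 1)))  # respond
print(predict((0, 0)))  # ignore
```

Treating each feature independently given the class is what keeps the parameter count linear in the number of predictors, which is the "large set of independent variables" appeal the abstract mentions; the GNBC described next relaxes exactly this assumption.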
However, by relaxing this assumption, these issues can be overcome while retaining most of the appealing features of the NBC. I call this technique the Generalized Naive Bayes Classifier (GNBC). In this talk I will describe the basic details of, and justification behind, the GNBC. I will also demonstrate the %GNBC and %SCOREGNBC macros that can be used to fit and implement Generalized Naïve Bayes Classifiers.

Kasindra Maharaj and Robert Ceurvorst, Ph.D., Synovate, Inc.
Data Mining-Based Segmentation for Targeting: A Telecommunications Example
Companies in every industry today must demonstrate return on investment for any new or existing product, service or marketing activity. Toward that end, they need to develop an integrated understanding of consumers' behavior and their needs, attitudes and demographics, and then leverage that understanding to target their offerings and marketing activities more efficiently and profitably. Advances in data mining over the past few years have allowed companies to answer increasingly complex questions in this regard. We at Synovate have used SAS since the 1970s to continually develop SAS macros for many of our data mining needs and other statistical applications. This presentation demonstrates the results of a data mining process we call Targeted Segmentation, which produces segments that unify information of various types and from various sources to provide that integrated understanding of customers. In targeted segmentation, we combine the various types of information in meaningful ways to produce segments that are differentiated on all aspects. We focus on an example from the telecommunications industry, where the company sought need- and benefit-driven segments that could be identified using only behavioral information and customer characteristics residing in their database. The segmentation we developed allowed our client to classify customers with over 90% accuracy in each segment.
We have successfully deployed this segmentation process for clients in numerous industries, including financial services, telecommunications, healthcare, automotive, fast-moving consumer goods, and others.

Debbie MeGee, David Shepard Associates
The Road to Profit through Data-Driven Marketing: An Analysis of Advanced Customer-Information-Based Techniques to Drive Direct Marketing Strategies
Marketing professionals already know the importance of a customer database and are most likely doing some statistical modeling and analysis. But how do you really pay back the investment of building that database? Now you want to gain a greater understanding of advanced data-driven marketing techniques to truly unlock the power of that customer data. Find out how to transform customer data into relevant information that drives a customized marketing strategy, one that recognizes the differences and the value of each individual customer. This session tells you how to align customer information with marketing decision-making processes to create a more profitable organization. You'll learn the best practices for using Customer Lifetime Value, Offer Optimization, Targeting & Tailoring and Strategic Segmentation to:
- leverage customer value
- increase retention
- align products with customer needs
- drive marketing strategies
- increase customer satisfaction
- enhance return on investment
This session will enable marketers to make customer data the foundation of strategic marketing decisions. Here's a sampling of what you'll learn:
- A working understanding of advanced data-driven marketing strategies that are proven in the marketplace
- Examples and benefits of proven cases
- Best practices of customer-centric companies
- How to align the right product and offer with the right customer
- How to use advanced testing techniques to tailor your customer offers
- When to use strategic versus tactical segmentation
- How to avoid the most common errors direct marketers make when implementing segmentation strategies
- What data and models you need to implement an optimization strategy
- What database capabilities are required to support advanced data-driven marketing techniques
- How to use customer information to enhance return on investment
Specific topics covered include:
- Customer Lifetime Value: how lifetime value is used in decision making; definitions and applications of Customer Lifetime Value
- Targeting and Tailoring: different methods of testing; using experimental design techniques to develop targeted strategies; ease of implementation versus economic returns
- Offer Optimization: aligning products with customer needs; solving for maximum profits while satisfying business rules; challenges with getting it right; calibrating your models
- Strategic Segmentation: the difference between strategic and tactical segmentation and when to use each; how to evaluate a segmentation study; how ten statisticians can produce ten different segmentations from the same data; common segmentation myths; making the segments actionable
- Best Practices: what are the most advanced companies doing?
- The results of an industry survey on database marketing practices
- Customer Database: the foundation for data-driven strategies; capabilities required to support advanced techniques; the speed and flexibility required to access and use the data; a checklist of minimum and advanced database capabilities

David Montgomery, Poindexter Systems
Automation of the Optimal Allocation of Online Advertising using SAS Enterprise Miner and SAS OR in an ASP application
Oftentimes in internet advertising, the results of predictive modeling algorithms need to be updated or constrained on the fly to meet changing business objectives. Poindexter's Progressive Optimization Engine (POE™) uses decision trees generated by Enterprise Miner and feeds the leaf nodes of the output tree into a generalized profit-maximizing non-linear programming formulation to optimally allocate online advertising. The outputs of Enterprise Miner and the NLP are executed by a distributed content-serving platform, and results are displayed in an ASP application using the latest advancements in SAS/GRAPH and SAS ODS.

Olivia C. Parr-Rud, OLIVIAGroup and SIGMA Marketing
Key Steps for Effective Predictive Modeling
Automated modeling software systems that streamline the predictive modeling process are enabling data miners to develop sophisticated models for marketing, risk and customer relationship management. However, the model processing, which is the main focus of these software systems, is a minor step in the whole process. The success of the model depends heavily on the diligence applied in the steps leading to and following the model processing. Determining the objective is the first and most critical step. The modeler must consider the company objective as well as the data miner's ability to implement the final model. The next step is getting relevant, accurate data for the project. After the model is built, thorough validation is key to assuring the model's performance.
Finally, implementation must be flawless to ensure the model's success. In this session, I will discuss these key steps and provide real-world examples from a variety of industries.

Justin Petty, Aspen Analytics, Inc.
Proactive Customer Retention: Improving Loyalty One Customer at a Time
Retaining customers has become the focus of many companies' marketing efforts as attrition rates rise. There are many ways to develop a retention strategy, yet one component important to all strategies is determining the right customers to target. Building predictive scoring models to identify the customers at the highest risk of leaving (or churning) is a good first step. In the CRM world, predictive scores are often valued not just for their predictive power but also for the learnings derived from developing the model. By understanding what factors are driving customers to be unhappy with a service, their concerns can be addressed by matching retention offers to individual situations. This approach, delivered through inbound and outbound call centers, has resulted in up to a 50% reduction in attrition rates. By creating customized reactive and proactive initiatives, the customer experience is improved and loyalty is increased.

Randy Rose, Computer Sciences Corporation
CSC and SAS Solve the IPIA Fraud and Erroneous Payments Compliance Challenge
Since its enactment, federal agencies have struggled to comply with the Improper Payments Information Act (IPIA). Problems include disaggregated and multiple sources of data, difficulty in identifying and understanding the different factors and drivers behind fraud and erroneous payments, and building comprehensive solutions that account for the needed changes to people, processes and technology. CSC and SAS have partnered to develop a data-driven methodology to support federal agencies' compliance with the IPIA.
CSC and SAS have brought together world-class consultants, data miners and programmers into a comprehensive methodology called Resource and Inventory Optimization (RIO). RIO teams will support agencies in organizing key IPIA reporting data, identifying the drivers of fraud and erroneous payments, and establishing a centralized function to report improvements to Congress. The teams will also help agencies design solutions to address IPIA-related issues using available people, re-designed processes and enabling technologies. The RIO approach is differentiated in that it delivers a cost-effective, analytically driven solution within very short time frames and provides a foundational platform for agencies to continue to expand the sophistication of the process as resources become available. The RIO approach is collaborative: it leverages agency subject-matter experts to deliver a comprehensive assessment of business processes, develops statistically valid business rules using agency data, implements the necessary process changes and programming in legacy systems, and transfers ownership of the business rules (including their management, analysis and refinement) directly to the agency to minimize ongoing contractor dependence. RIO delivers agencies an implemented, wholly owned, analytically based solution to combat fraud and erroneous payments and comply with the IPIA.

David Salsburg, Consultant and retired Pfizer research fellow
Ten Percent is Not ...
John Tukey once commented that, when dealing with large medical data sets, you can be quite sure that ten percent is not ... is not there, is not correct, is not in the units you think it is. Whatever attribute you assume describes the individual entries, ten percent do not have that attribute. For large data sets assembled under more precise protocols, the amount that is not ... may be as low as five percent or even one percent.
However, anyone who attempts to examine a large collection of data has to be prepared for the entries that are not ... This talk will describe some of the blunders that have occurred in the past when analysts assumed something that is not ... These include the elderly-white-males-in-foreign-sports-cars fallacy, the ninety laboratory mice all dead "by accident" on the same day, and the remarkable uniformity of autopsy reports at London Hospital. There will also be a discussion of methods the analyst can use to identify things that are not ..., especially when dealing with multiple gigabytes of data.

Gregory Smith, WWF
The Road to Data Mining: A WWF Case Study on Business Intelligence in the Non-Profit Sector

Joseph Somma, Sigma Marketing
Lifetime Value Segmentation in the B-to-B Market: A Comparison of Approaches
This paper explores the process of developing a lifetime value segmentation schema for B-to-B marketing applications. Lifetime Value (LTV) segmentation is the process of determining the net present value of a customer and is an integral step in the development of a robust, multi-level segmentation program. Since the inception of this approach, a variety of methodological alternatives have been developed. Some approaches have attempted to accommodate a risk-based modification, while others have attempted to place LTV within a broader marketing realm by integrating measures of retention, brand and customer satisfaction. The paper reviews these techniques and provides a comparison using B-to-B data. It demonstrates how lifetime value is calculated using several methodologies:
- a traditional calculation employing a standard NPV model
- calculations incorporating retention estimates as a deflator to value
- calculations that integrate Lifetime Value with brand and service quality
The paper compares the output from the three approaches and provides recommendations for their use and for future research.
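The first two methodologies in Somma's abstract are easy to make concrete: the traditional calculation discounts each period's expected margin, and the retention-adjusted variant additionally deflates each period by the probability the customer is still active. A minimal sketch with hypothetical figures (the margins, rates and horizon below are illustrative, not from the paper):

```python
def ltv_npv(margins, discount_rate):
    """Traditional LTV: net present value of expected per-period margins."""
    return sum(m / (1 + discount_rate) ** t
               for t, m in enumerate(margins, start=1))

def ltv_retention_adjusted(margins, discount_rate, retention_rate):
    """Retention-adjusted LTV: each period's margin is further deflated by
    the probability the customer is still active (retention_rate ** t)."""
    return sum(m * retention_rate ** t / (1 + discount_rate) ** t
               for t, m in enumerate(margins, start=1))

# Hypothetical customer: $100 margin per year for 3 years,
# 10% discount rate, 80% annual retention.
print(round(ltv_npv([100, 100, 100], 0.10), 2))                       # 248.69
print(round(ltv_retention_adjusted([100, 100, 100], 0.10, 0.80), 2))  # 164.09
```

As expected, incorporating retention as a deflator always lowers the value estimate, which is why the two methodologies can rank customers differently when retention rates vary across segments.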
Thomas Thomaidis and Koos Berkhout, Loyalty Management UK
Intelligent automation: optimising one of Europe's largest and most complex direct mailing campaigns
Putting together a direct marketing campaign with many product offers from one company is a tricky business: many combinations of products and offer levels, each with different appeal for each individual, must be reconciled, while numerous interests have to be served amongst the brands themselves. The complexity increases when your "products" are different companies. Loyalty Management UK are endeavouring to automate the offer-generation process for such a campaign by allowing multiple models to compete for positioning while adapting to operational and structural constraints. Addressing the relevance of the offering, choosing the discount level appropriate to each customer's sensitivity, and tuning your message to take an individual's knowledge level into account forms a multidimensional optimisation problem that calls for solutions across the breadth of CRM and data mining. This presentation will discuss how LMUK are addressing this challenge for their quarterly DM campaign, which generates 7 million different versions for 8 million households (nearly 35% of the UK's population) and in which close to 20 companies are represented, each with their own constraints. The insight gained from implementing an optimum solution for a multi-model environment with multiple stakeholders is invaluable for data-driven business analytics across a wide variety of industries.

Bhavani Thuraisingham, University of Texas at Dallas
Data Mining Technologie
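Kim Larsen's abstract earlier in the program describes why the Naive Bayes Classifier works: it assumes conditional independence between predictors, trading a little bias for a large reduction in variance. A minimal from-scratch sketch for categorical predictors (the toy churn data and the two-value smoothing constant are illustrative assumptions, not anything from the talk):

```python
import math
from collections import defaultdict

def train_nb(rows, labels):
    """Count class priors and (class, feature, value) occurrences.
    Conditional independence lets us keep one count table per feature."""
    priors = defaultdict(int)
    cond = defaultdict(int)
    for x, y in zip(rows, labels):
        priors[y] += 1
        for i, v in enumerate(x):
            cond[(y, i, v)] += 1
    return priors, cond, len(labels)

def predict_nb(x, priors, cond, n):
    """Pick the class maximizing log P(y) + sum_i log P(x_i | y),
    with add-one (Laplace) smoothing to avoid zero probabilities."""
    best, best_score = None, -math.inf
    for y, cnt in priors.items():
        score = math.log(cnt / n)
        for i, v in enumerate(x):
            # Simplification: smooth as if each feature had 2 possible values.
            score += math.log((cond[(y, i, v)] + 1) / (cnt + 2))
        if score > best_score:
            best, best_score = y, score
    return best

# Toy churn data (hypothetical): (contract type, usage level) -> churned?
rows = [("month", "low"), ("month", "high"), ("year", "low"),
        ("year", "high"), ("month", "low")]
labels = [1, 1, 0, 0, 1]
model = train_nb(rows, labels)
print(predict_nb(("month", "low"), *model))  # 1: month-to-month, low usage looks like churn
```

The per-feature count tables are exactly why the NBC scales so gracefully to large sets of independent variables; Larsen's GNBC relaxes the independence assumption while keeping this structure.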
