"The Peculiarities Of Volatility" by Dr. Ernest Chan, Managing Member, QTS Capital Management, LLC
Ernie will explore some interesting features of both realized and implied volatilities that are useful to traders. These include the term structure of volatility, simple methods of volatility prediction, and what volatility and its siblings can tell us about future returns.
"When Should You Build Your Own Backtester?" by Dr. Michael Halls-Moore, founder of QuantStart.com
The huge uptake of Python and R as first-class programming languages within quantitative trading has led to an abundance of widely available backtesting libraries. It can take months, if not years, to develop a robust backtesting and trading infrastructure from scratch, and many of the vendors (both commercial and open source) have a huge head start. Given such prevalence and maturity of the available software, as well as the time investment needed for development, is there any benefit to building your own?
In this talk, Mike will weigh the advantages and disadvantages of building your own infrastructure, explain how to develop and improve your first backtesting system, and show how to make it robust to internal and external risk events. The talk will be of interest whether you are a retail quant trader managing your own capital or are forming a start-up quant fund with initial seed funding.
"The Holy Grail of Investing: From Theory into Practice" by Justin Lent, Director of Fund Development at Quantopian and Dr. Jessica Stauth, Vice President of Quant Strategy at Quantopian
Diversification has been called the only free lunch in investing. However, in practice, the process of identifying uncorrelated return streams is typically quite expensive in terms of both capital and time invested. This talk will explore a novel and economical approach to identifying uncorrelated alpha at scale.
In addition to a review of tools and techniques, Jess and Justin will share performance results from proprietary trading allocations to algorithms sourced from the online community.
"A Guided Tour Of Machine Learning For Traders" by Dr. Tucker Balch, Chief Scientist at Lucena Research, Professor at Georgia Tech
You’ve probably heard about Machine Learning and you likely know it is of emerging importance for trading and investing. Unfortunately it is a deeply technical field and the complexity and jargon get in the way of broader use and understanding. There are literally hundreds of learning algorithms that each solve a slightly different problem. Which algorithms really matter for investing? In this presentation, Professor Balch will help declutter the ML jungle. He’ll introduce a few of the most important ML algorithms and show how they can be applied to the challenges of trading.
"Trade Like A Chimp! Unleash Your Inner Primate" by Andreas Clenow, CIO Acies Asset Management
It is a long established fact that a reasonably well behaved chimp throwing darts at a list of stocks can outperform most professional asset managers. It is less known why this is the case. While there would be obvious advantages with hiring chimps over hedge fund traders, such as lower salaries and calmer tempers, there are also a few practical obstacles to such hiring practices. For those asset management firms unable to retain the services of a cooperative primate, a random number generator may serve as a reasonable approximation of their skills.
The fact of the matter is that even a random number generator can, and will, outperform practically all mutual funds. Such random strategies may seem like a joke, and perhaps they are, but if a joke can outperform industry professionals we have to stop and ask some hard questions.
When designing investment strategies, it can be very useful to have an understanding of random strategies, how they work and what kind of results they are likely to yield. Given that random strategies perform quite well over time, they can act as a valid benchmark. After all, if your own investment approach fails to outperform a random strategy, you may as well outsource your quant modeling to the Bronx Zoo.
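As a rough illustration of the benchmark idea above (not Clenow's own methodology), here is a minimal numpy sketch that simulates dart-throwing "chimps" on synthetic return data; the stock count, holding size, and return parameters are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n_stocks, n_months, n_chimps = 100, 120, 1000

# Synthetic monthly stock returns: a common market factor plus idiosyncratic noise
market = rng.normal(0.008, 0.04, size=(n_months, 1))
returns = market + rng.normal(0, 0.08, size=(n_months, n_stocks))

# Each "chimp" throws darts: an equal-weighted portfolio of 20 random stocks,
# re-drawn every month
chimp_perf = np.empty(n_chimps)
for i in range(n_chimps):
    picks = np.stack([rng.choice(n_stocks, 20, replace=False)
                      for _ in range(n_months)])
    monthly = np.take_along_axis(returns, picks, axis=1).mean(axis=1)
    chimp_perf[i] = (1 + monthly).prod()  # terminal wealth multiple

# The distribution of random-portfolio outcomes is the benchmark:
# a strategy should beat, say, its median before claiming any skill.
print(f"median chimp growth: {np.median(chimp_perf):.2f}x")
```

The point of the sketch is the last line: the random strategies' outcome distribution, not any single draw, is the bar to clear.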
"Quantitative Trading in the Eurodollar Futures Market" by Edith Mandel, Principal at Greenwich Street Advisors, LLC.
Although the fixed-income market overall still lacks liquidity and transparency, Eurodollar futures are a very liquid and accessible portion of it. The Eurodollar market is defined by a set of key features: pro-rata matching, large tick size, an overlapping and highly correlated set of contracts, hidden implied liquidity, and sticky price quotes. We will describe methodologies suitable for dealing with the market's complexity, making the case that high-frequency market-making, alpha trading, and algorithmic execution need to be linked closely to achieve continued success.
"Systematic M&A Arbitrage" by Yin Luo, Managing Director & Global Head Of Quantitative Strategy, Deutsche Bank
The profitability of risk arbitrage critically depends on two key factors: how long it takes to close the deal and the probability of deal closing on its original terms. We built a logit model to predict the probability of deal closing and a survivor model to analyze deal closing time, using both deal-specific data and traditional quantitative signals. The deal time/probability adjusted M&A premium is far more precise than the traditional premium. Our systematic M&A portfolio significantly outperforms the traditional risk arbitrage strategies.
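The talk's actual model and deal data are proprietary; as a minimal sketch of the logit component only, here is a toy deal-closing-probability model fitted by gradient descent on fully synthetic features (every feature name, coefficient, and the probability-adjusted premium shortcut are assumptions for illustration, not Deutsche Bank's specification):

```python
import numpy as np

rng = np.random.default_rng(7)
n_deals = 2000

# Hypothetical deal-level features (all synthetic): premium offered,
# cash vs. stock consideration, and a regulatory-risk score
premium = rng.normal(0.25, 0.10, n_deals)
cash_deal = rng.integers(0, 2, n_deals).astype(float)
reg_risk = rng.normal(0.0, 1.0, n_deals)

# Synthetic ground truth: higher premiums and cash deals close more often
logits_true = 1.0 + 2.0 * premium + 0.5 * cash_deal - 0.8 * reg_risk
closed = (rng.random(n_deals) < 1 / (1 + np.exp(-logits_true))).astype(float)

# Fit a logit model by gradient descent on the logistic loss
X = np.column_stack([np.ones(n_deals), premium, cash_deal, reg_risk])
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - closed) / n_deals

p_close = 1 / (1 + np.exp(-X @ w))
# Probability-adjusted premium: weight the offered premium by the model's
# closing probability (deal-break loss and closing time omitted for brevity)
adj_premium = p_close * premium
```

A full version would pair this with a survival model for closing time, as the abstract describes, before ranking deals.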
"Latency in Automated Trading Systems" by Dr. Andrei Kirilenko, Director of the Centre for Global Finance and Technology and a Visiting Professor of Finance at the Imperial College Business School
Time in an automated trading system does not move in a constant, deterministic fashion. Instead, it is a random variable drawn from a distribution. This happens because messages enter and exit automated systems through different gateways and then race across a complex infrastructure of parallel cables, safeguards, throttles and routers into and out of the central limit order books. Understanding latency means you are eating lunch rather than being someone else's lunch. Add to it market fragmentation and you get a pretty complex picture of the effects of latency on price formation.
"Man versus Machine: Battle of the Strategies" by Dr. Lisa Borland, Portfolio Strategist at Cerebellum Capital
We have access to a large set of performance data for algorithms generated on the Quantopian platform (‘Man’), entered into the Quantopian competition and hence achieving a minimum in-sample set of performance statistics. A large set of similarly well-performing algorithms was discovered by the computer using our proprietary learning framework (‘Machine’). We explore the statistical features and out-of-sample performance of these two data sets to see in which ways they are similar and how they differ. One interesting question is which method is best for sourcing novel, uncorrelated trading ideas. And the winner is ….
"Improving Predictability of Oil via Reuters News Text" by Dr. Sameena Shah, Director of Research and the Head of the Research and Development New York Lab for Thomson Reuters
Traditionally, commodities futures models incorporate metrics like inventory and supply-demand numbers. While supply chain disruptions, outages, and other significant events play a crucial role in spot and futures prices, modeling them is not trivial. In this talk Sameena will describe how her team captured significant events from news and modeled their impact on oil futures returns.
"Statistics: The Missing Link between Technical Analysis and Algorithmic Trading" by Manish Jalan, Managing Partner & Quantitative Research Head at SGAnalytics and consultant with Dun and Bradstreet, The National Stock Exchange of India, and Bank of America
Trading leveraged derivatives using only technical or speculative analysis can lead to windfall losses for even the most disciplined trader and investor. Statistics is an often-ignored area of work when it comes to derivatives trading. Our talk will focus on how volatility can be used to dynamically adjust stop losses. It will cover how correlation is an essential method for diversifying the class of derivatives being traded or hedged. It will focus on co-integration as a key method for distinguishing a mean-reverting time series from a non-mean-reverting one. It will touch upon other essential time-series econometrics such as the OU process and VRT, as well as statistical tools like PCA, ARCH, and GARCH, which are essential for derivatives pricing and forecasting volatility.
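Of the tools listed, the variance ratio test (VRT) is simple enough to sketch. The following minimal numpy example, on synthetic series rather than market data, shows how the ratio separates a random walk from a mean-reverting process (the lag choice and AR(1) parameters are illustrative assumptions):

```python
import numpy as np

def variance_ratio(prices, q):
    """VR(q): variance of q-period log returns over q times the variance
    of 1-period log returns. Approximately 1 for a random walk, below 1
    for mean reversion, above 1 for trending behavior."""
    logp = np.log(prices)
    r1 = np.diff(logp)
    rq = logp[q:] - logp[:-q]
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

rng = np.random.default_rng(0)
n = 5000

# Random-walk price path
rw = np.exp(np.cumsum(rng.normal(0, 0.01, n)))

# Mean-reverting (Ornstein-Uhlenbeck-style) log price via an AR(1)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + rng.normal(0, 0.01)
ou = np.exp(x)

print(variance_ratio(rw, 10))  # close to 1
print(variance_ratio(ou, 10))  # well below 1
```

For the AR(1) above, the theoretical ratio is (1 - 0.9^10) / (10 * 0.1), roughly 0.65, so the gap from 1 is large relative to sampling noise.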
"Trading Strategies Based on Impact of Macroeconomic Announcements" by Dr. Alec Schmidt, Lead Research Scientist at Kensho
We examine returns of several US equity ETFs on the days of major US macroeconomic announcements and compare performance of the buy-and-hold strategy (B&H) with three different strategies that realize daily returns on the announcement days. We show that these strategies may outperform B&H.
"Fast and Smart: A Paradox in Quantitative Trading" by Christina Qi, Co-founder and Partner at Domeyard
Domeyard is a hedge fund focused on ultra low-latency trading. This talk presents a common paradox in the context of quantitative trading and how advancements in technology can confront this problem. We will also discuss what it's like to work at an HFT hedge fund.
"All that glitters ain't gold: Comparing backtest and out-of-sample performance on a large cohort of trading algorithms" by Dr. Thomas Wiecki, Lead Data Scientist, Quantopian
“Past performance is no guarantee of future returns”. This cautionary message will certainly match the experience of many investors. When automated trading strategies are developed and evaluated using backtests on historical pricing data, there is always a tendency, intentional or not, to overfit to the past. As a result, strategies that show fantastic performance on historical data often flounder when deployed with real capital.
Quantopian is an online platform that allows users to develop, backtest, and trade algorithmic investing strategies. By pooling all strategies developed on our platform we constructed a huge and unique data set of trading algorithms. Although we do not have access to source code, we have returns and portfolio allocations as well as the time the algorithm was last edited. This allows us to compare returns over the period the author had access to and potentially overfit on, as well as true out-of-sample data that accumulated since then. In this talk I will shed light on the prevalence of backtest overfitting and debunk several common myths in quantitative finance based on empirical findings. Moreover, I’ll show how I trained a machine learning classifier on this dataset to predict whether an algorithm is overfit or not and how its future performance will likely unfold.
"Machine Learning at Bloomberg" by Gary Kazantsev, Head of the Machine Learning group at Bloomberg
In this talk, we will discuss the evolution of the machine learning landscape from the perspective of the global financial industry. We will describe the development of several Bloomberg machine learning projects, such as sentiment analysis, prediction of market impact, novelty detection, social media monitoring, and question answering, illustrating the applications with recent results from strategy development using news analytics. We will show that these interdisciplinary problems lie at the intersection of linguistics, finance, computer science, and mathematics, requiring input from signal processing, machine vision, and other fields. We will discuss the methods and problem formulations and, throughout, the practicalities of delivering machine learning solutions to problems in finance, emphasizing issues such as appropriate problem decomposition, validation, and interpretability. We will also summarize the current state of the art and discuss possible future directions for the applications of natural language processing and machine learning methods in finance. The talk will end with a Q&A session.
"Honey, I Deep-Shrunk the Sample Covariance Matrix!" by Dr. Erk Subasi, Quant Portfolio Manager at Limmat Capital Alternative Investments AG
Since the seminal work of Markowitz, covariance estimation has been of prime importance for portfolio construction. Running naive portfolio optimizations on sample covariance estimates can be hazardous to the health of one's portfolio, though. Recent developments in machine learning, in particular in deep learning, suggest that high-level abstractions and deep architectural representations are key to success when dealing with non-linear, noisy real-life data. Motivated by this, we demonstrate a novel form of robust covariance estimation based on ideas borrowed from the deep-learning domain. In a pedagogical setting, we will show how to use TensorFlow, a recently open-sourced deep-learning library from Google, to build a robust covariance estimator via denoising autoencoders.
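The talk builds its estimator in TensorFlow; as a dependency-free sketch of the same idea, here is a tiny denoising autoencoder in plain numpy that reconstructs synthetic returns through a low-dimensional bottleneck and then takes the covariance of the reconstruction (architecture, hyperparameters, and training scheme are illustrative assumptions, not the speaker's):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, H = 500, 8, 4  # observations, assets, hidden units (bottleneck)

# Synthetic returns with a 1-factor structure plus noise, then standardized
factor = rng.normal(size=(T, 1))
beta = rng.normal(size=(1, N))
X = factor @ beta + 0.5 * rng.normal(size=(T, N))
X = (X - X.mean(0)) / X.std(0)

# Single-hidden-layer denoising autoencoder trained by plain gradient descent:
# corrupt the inputs with noise, learn to reconstruct the clean returns
W1 = 0.1 * rng.normal(size=(N, H)); b1 = np.zeros(H)
W2 = 0.1 * rng.normal(size=(H, N)); b2 = np.zeros(N)
lr = 0.01
for epoch in range(200):
    noisy = X + 0.3 * rng.normal(size=X.shape)
    h = np.tanh(noisy @ W1 + b1)
    out = h @ W2 + b2
    err = out - X
    # Backpropagation of the mean squared reconstruction error
    dW2 = h.T @ err / T; db2 = err.mean(0)
    dh = err @ W2.T * (1 - h**2)
    dW1 = noisy.T @ dh / T; db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# "Denoised" covariance: covariance of the reconstructed returns, whose
# structure is constrained by the low-dimensional bottleneck
h = np.tanh(X @ W1 + b1)
recon = h @ W2 + b2
cov_dae = np.cov(recon, rowvar=False)
cov_sample = np.cov(X, rowvar=False)
```

The bottleneck (H < N) is what regularizes the estimate: the reconstruction, and hence its covariance, can only express a low-dimensional factor structure.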
"Combining the Best Stock Selection Factors" by Patrick O'Shaughnessy, a Principal and Portfolio Manager at O’Shaughnessy Asset Management (OSAM)
Patrick will explore how to combine the value factor with other stock selection factors to build a superior stock selection strategy. He will discuss unique ways of using momentum, share buybacks, and quality factors to improve on a simple value screen. He will discuss portfolio concentration, rebalancing, and risk management. He will also explain why the best versions of these strategies are only possible for smaller firms and investors.
"Intro to Data Analysis in Python" by Anita Raichand, Data Scientist and Author of "Practical Data Analysis with Python"
Interested in learning how to use code to analyze data? The workshop participant will learn exploratory data analysis using open data and the Python programming language. Data preparation and visualization will also be covered. This workshop is suitable for people new to coding in Python as well as spreadsheet gurus. This is a hands-on workshop so please bring a computer. Software requirements will be provided prior to the event.
"The Sustainable Active Investing Framework: Simple, But Not Easy" by Dr. Wesley Gray, Founder of Alpha Architect
To some, the debate of passive versus active investing is akin to Eagles vs. Cowboys or Coke vs. Pepsi. In short, once our preference for one style over the other is established, it can become so overwhelming that it turns into a proven fact or incontrovertible reality in our minds.
We cannot overemphasize that generating alpha in the market is no cakewalk. More importantly, being smart, having superior stock-picking skills, or amassing an army of PhDs to crunch data is only half of the equation. Even with those tools, you are still only one shark in a tank filled with other sharks. All the sharks are smart, all the sharks have an MBA or PhD from a fancy school, and all the sharks know how to analyze a company. Maintaining an edge in these shark-infested waters is no small feat, and one that only a handful of investors (we can count them on one hand) has successfully accomplished.
In order to achieve sustainable success as an active investor, one needs both skill and an understanding of human psychology and market incentives (behavioral finance). We start our journey where mine began: as an aspiring PhD student studying under Eugene Fama at the University of Chicago. Let the adventure begin...
"A Vision For Quantitative Investors in The “Data Economy”" by Michael Beal, CEO of Data Capital Management
Quantitative Investors have long been charged with an exhilarating challenge - to derive insight from data. To support this ardor, a plethora of traditional data and technology vendors have entrenched themselves as critical partners in our pursuit of Alpha.
Over the last decade, a new partner in the pursuit of “automated truth from data” has emerged. Billions of dollars in Venture Capital funding have created an ecosystem of “Big Data”, “Cognitive Intelligence”, “Cloud Technology”, etc. companies seeking to extract information from anything and everything (e.g. unstructured text, sensors, satellites, etc.). This “Data Revolution” began in California and is now blossoming globally.
As “Silicon Alley” brings financial technology to the mainstream, what new opportunities await the ambitious? What disruptions threaten the complacent? And which historical analogs best illuminate the path forward for Quantitative Investors in the “Data Economy”?
"Market Timing, Big Data, And Machine Learning" by Dr. Xiao Qiao, Finance PhD at the University of Chicago and consultant for Hull Investments
Return predictability has been a controversial topic in finance for a long time. We show there is substantial predictive power in combining forecasting variables. We apply correlation screening to combine twenty variables that have been proposed in the return predictability literature, and demonstrate forecasting power at a six-month horizon. We illustrate the economic significance of return predictability through a simulation which takes positions in SPY proportional to the model forecast.
The simulated strategy yields annual returns more than twice that of the buy-and-hold strategy, with a Sharpe ratio four times as large. This application of big data ideas to return predictability serves to shift the sentiment associated with market timing.
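The twenty variables and exact screening rule from the talk aren't reproduced here; this numpy sketch only shows the mechanics of correlation screening and forecast-proportional positioning on synthetic data (the threshold, predictor structure, and position rule are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
T, K = 600, 20  # periods of history, candidate predictors

# Synthetic data: a few predictors carry real (weak) signal, the rest are noise
signal = rng.normal(size=T)
fwd_ret = 0.3 * signal + rng.normal(size=T)  # stand-in for 6-month-ahead return
predictors = rng.normal(size=(T, K))
predictors[:, :4] += signal[:, None]         # first 4 are informative

# Correlation screening on a training window: keep predictors whose
# absolute correlation with forward returns clears a threshold
train = slice(0, 400)
corrs = np.array([np.corrcoef(predictors[train, k], fwd_ret[train])[0, 1]
                  for k in range(K)])
keep = np.abs(corrs) > 0.1

# Combined forecast: sign-aligned average of the surviving predictors,
# standardized using training-window statistics only
z = (predictors - predictors[train].mean(0)) / predictors[train].std(0)
forecast = (z[:, keep] * np.sign(corrs[keep])).mean(axis=1)

# Position in the index proportional to the forecast (clipped to +/-1)
position = np.clip(forecast, -1, 1)
```

The screening and standardization use only the training window, so the later periods serve as a crude out-of-sample check of the combined forecast.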
"You Don't Know How Wrong You Are" by Delaney Granizo-Mackenzie, Academic Lead and Engineer at Quantopian
Quantitative finance is the only field in which the quality of your statistics is tied directly to your bank account. Subtle mistakes in statistical validation can cause models that look good historically to fall apart when actually traded. In this talk, Delaney will cover a few common issues faced when developing trading models, as well as introduce the Quantopian Lecture series.
"More Profit with Less Risk through Dual Momentum" by Gary Antonacci, Author of "Dual Momentum Investing: An Innovative Approach to Higher Returns with Lower Risk"
Gary will begin by reviewing the most common investment vehicles throughout history while explaining their advantages and disadvantages. He will then show how momentum can help accentuate the positives and eliminate the negatives. Using easily understood examples and historical research findings, Gary will show how relative strength momentum can enhance investment return, while trend-following absolute momentum can dramatically decrease bear market exposure. Finally, Gary will show how you can implement and easily maintain your very own dual momentum portfolio using the best asset classes.
In this talk you will learn how to:
- Spot the best investment opportunities in any market environment.
- Protect yourself from bear markets and behavioral biases.
- Construct your own low-cost, rules-based dual momentum portfolio that is simple to understand and easy to implement.
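A minimal sketch of the dual momentum rule described above, run on synthetic monthly returns (the lookback, asset set, and cash-return-of-zero assumption are illustrative, not Gary's published parameters):

```python
import numpy as np

def dual_momentum(returns, lookback=12):
    """Pick the asset with the strongest trailing return (relative momentum),
    but hold it only if that return beats zero/cash (absolute momentum);
    otherwise stay in cash. returns: (months, assets) array."""
    T = returns.shape[0]
    strat = np.zeros(T)
    for t in range(lookback, T):
        trailing = (1 + returns[t - lookback:t]).prod(axis=0) - 1
        best = trailing.argmax()
        if trailing[best] > 0:            # absolute-momentum filter
            strat[t] = returns[t, best]   # hold the relative-momentum winner
        # else: in cash, monthly return assumed 0
    return strat

# Toy example: one gently trending asset, one choppy asset
rng = np.random.default_rng(3)
rets = np.column_stack([rng.normal(0.01, 0.03, 240),
                        rng.normal(0.00, 0.06, 240)])
equity = (1 + dual_momentum(rets)).prod()
```

Relative momentum does the selecting; absolute momentum does the bear-market sidestepping by moving the whole position to cash.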
"Empowering Quantitative Investors In The “Data Economy”" by Napoleon Hernandez, Director of Research and COO of Data Capital Management
The proliferation of novel data sources has awoken quantitative investors to the promise of “Big Data”. Billions of dollars in venture capital funding have created an ecosystem of companies that help investors extract information out of unstructured text, sensors, etc. A “Vision for Quants in the Data Economy” is nice, but what does it take to turn that vision into reality? Join Data Capital Management as we discuss some of the breakthroughs by companies like Twitter, Google, and Facebook that are empowering quantitative investors to extract alpha from “Big Data.”
"Deep Value And The Acquirer's Multiple" by Tobias Carlisle, Managing Partner Of Carbon Beach Asset Management, LLC.
How to beat The Little Book That Beats The Market: An exploration of the deep value investment strategy. This talk combines engaging anecdotes with industry research to illustrate the principles and reasoning behind a counterintuitive investment strategy.
"From Backtesting to Live Trading" by Dr. Vesna Straser, an independent TCA, optimal trade execution and algorithmic trading consultant
Dr. Vesna Straser will discuss the differences in expected slippage between live trading, simulation trading, and backtesting. Typically, in backtesting, signal generation and order-fill assumptions are simplified to obtain strategy performance data faster. For example, many commercial backtesting software providers work with sampled data such as minute open or close price points and assume that a signal triggered at the close of one bar is filled at the close price of the next bar, per the assumed slippage model. Simulation trading, however, will typically run on tick trading data (live or replayed), potentially resulting in quite different dynamics versus backtesting. Orders are filled per fill assumptions that may vary significantly between providers. In live trading, orders are triggered and executed immediately under real market conditions and order types. Depending on the trading strategy, live trading results can differ dramatically from backtesting and/or simulation trading. Vesna will outline the issues, the analytics to track, the factors to consider, and how to account for them to achieve “realistic” backtesting results.
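A hedged sketch of the bar-based fill assumption described above, with a flat slippage charge added on top (the function name and the 5 bps figure are illustrative assumptions; real simulation and live fills depend on tick data, queue position, and order type):

```python
import numpy as np

def backtest_fill(close, signal, slippage_bps=5.0):
    """Naive bar-based fill model: a signal observed at the close of bar t
    is assumed filled at the close of bar t+1, shifted by a flat slippage
    charge in the direction of the trade. Treat this as an optimistic
    bound, not a realistic execution model."""
    fill_price = np.empty_like(close)
    fill_price[:-1] = close[1:]        # next-bar-close fill assumption
    fill_price[-1] = close[-1]         # last bar: no next bar available
    # Buys pay up, sells receive less; flat orders are unaffected
    return fill_price * (1 + np.sign(signal) * slippage_bps / 1e4)

close = np.array([100.0, 100.5, 101.0, 100.2])
signal = np.array([1, 1, -1, 0])       # +1 buy, -1 sell, 0 flat
fills = backtest_fill(close, signal)
```

Comparing results under this model against tick-level simulation fills is one concrete way to quantify the backtest-to-live gap the talk is about.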
"Social Data's Influence on Financial Markets" by Chris Camillo, Co-founder and CEO of TickerTags
As mass adoption of social networks advances the speed, reach, and mechanics of modern communication, the arc of data dissemination flattens, greatly diminishing the value of conventional financial news flow.
The multiplicity of chatter that propagates through large social user communities presents an atypical opportunity to monitor the evolving landscape of products, technology, media, entertainment, culture, and news quicker and more efficiently than any conventional form of financial research. But how do we, as investors, analysts and journalists, discover actionable insights hidden within terabytes of non-financial news flow and unstructured social data?
"Needle in the Haystack - Mining for Actionable Information in the Noisy Web" by Anshul Vikram Pandey, Co-Founder and CTO, Accern Corporation
The amount of text data (news articles, blogs, social media, etc.) on the web is increasing at a staggering rate. However, the amount of irrelevant information, or noise, on the web is increasing at a much higher rate than the actionable information that can generate alpha. It is becoming increasingly difficult to mine for actionable stories on the web using standard, out-of-the-box language processing techniques and libraries. Given that the performance, robustness, and reliability of all data-centric models depend directly on the quality of the data, noise reduction becomes one of the most important steps in the data science pipeline. Thanks to recent research advancements in big data, deep learning, and natural language processing, we are now able to mine for actionable stories in millions of information pieces and hundreds of terabytes of data.
In this talk, we will highlight various approaches and technologies we employ as part of the noise cancellation mechanism at Accern. We will also compare the performance of trading strategies that use social analytics derived using standard versus sophisticated noise cancellation techniques, as well as those that utilize other advanced metrics.
"This Illegally Collected Data Set Produced More Alpha For Hedge Funds Than Any Other" by Leigh Drogen, CEO of Estimize
Analyst recommendations, ratings and price targets have been the focus of much consternation and ridicule from both the public and regulatory agencies over time. While there has been a lineage of academic and industry papers focused on the severe biases inherent in these data sets, and the effect of those biases on the accuracy and representativeness of the data sets, they continue to have a significant effect on the market due to severe availability heuristics at play with investor decision making. Quants have arbitraged this data and these effects to generate alpha. But for half a decade prior to January of 2014, several major quantitative funds had been collecting a different, secret data set from the sell side with a far superior design. This data set ended up producing more alpha for these funds than any other in recent history, until government regulatory bodies uncovered the illegal nature of its collection. This talk will focus on the genesis of this data set, how it was used, why it was so superior, and how you can get your hands on it soon.
"Lighting Up Your Dark Data" by Lance Ransom, a Product Manager at Continuum Analytics and a former Partner and CTO of Schonfeld Group
Quants are faced with a complex data environment. Data is everywhere, and it is increasingly challenging to analyze, explore, and evaluate, all in one language and in one environment. Quants need a unified environment where they are able to write expressions and conduct pushdown processes, all without having to move the data, with the ability to deploy anywhere, anytime. Organizations need to better marshal the data and have the visibility to conduct a clean transformation. This session will discuss how businesses gain a better understanding of their data, leading to better results. In the FinServ industry, fluency with data will help create better risk models and trading strategies. Ransom will discuss how organizations address these challenges and future-proof their work.
"Machine Learning Based High Frequency Bitcoin Trading" by Arshak Navruzyan, Founder of Startup.ML
With a daily volume of thirty to fifty million US dollars and a market capitalization over five billion, Bitcoin is becoming interesting as a financial instrument for inclusion in a quantitative trading strategy. We will explore the unique issues of the various exchanges, impact of exogenous events and demonstrate a fully automated machine learning based trading system.
"After the Algorithm" by Daniel Schultz, Head of Partnerships at Robinhood
Although many quants are well versed in writing and deploying trading strategies, many may not be familiar with everything that happens after the algorithm is written in order for a trade to execute. This talk will give you an overview of Robinhood's business and our approach to partnerships, and will cover the intricacies of operating a broker-dealer across multiple exchanges. Payment for order flow, dark pools, and trading securities on an exchange other than the one where they are listed all contribute to a complicated environment. Daniel will cover the details of how Robinhood executes trades to give attendees a better understanding of what happens after an algorithm is deployed.