General:

Rebellion Research:
http://www.rebellionresearch.com/
Great link - RR in the Press: http://www.rebellionresearch.com/press.html

Spencer Greenberg:

Alexander Fleiss:

Jeremy Newton:

Jonathan Sturges: [no longer working at Rebellion according to his FB]

-----------

Dealbreaker articles:
http://dealbreaker.com/tag/spencer-greenberg/

2006.07.02 BusinessWeek - Hedge Fund Toddlers
http://www.businessweek.com/stories/200 ... d-toddlers

Fleiss's mother, Karen M. Fleiss, manages hedge fund KMF Partners. Alexander cut his teeth when, as a 19-year-old, he wrote a trading program that alerted him to the shares of a bankrupt leasing company that appeared undervalued. With money left to him by his grandfather, Fleiss accumulated so much of the 37-cent stock that he had to file an ownership statement with the Securities & Exchange Commission. The shares soared to $3.70, and Fleiss plowed his winnings into Rebellion.

- According to this link, the SEC ownership-filing requirement comes into effect when you own 5% or more of a stock, and as of 2013 the minimum number of outstanding shares to be listed on the NYSE is 1,100,000 (although we don't know whether the company was listed on the NYSE or some other exchange). So if Fleiss bought 5% of 1,100,000 shares at $0.37 a share, that's a $20,350 bet (which sounds reasonable for a 19-year-old with a serious interest in investing). If the stock price then went up by a factor of 10 and he sold all his shares at that high point, that would have made him a profit of $203,500 - $20,350 = $183,150. That's very similar to Jesse Livermore's winnings of ~$10,000 by his late teens in the 1890s (~$268,000 in 2013 dollars).
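Here's a minimal sketch of that arithmetic, using the note's assumed figures (the 5% filing threshold, the 1,100,000-share NYSE minimum, a $0.37 buy, a $3.70 sell); none of these are verified facts about the actual trade:

```python
# Back-of-the-envelope arithmetic for the hypothetical bet described above.
# All numbers are assumptions taken from the note, not verified figures.

shares_outstanding = 1_100_000   # 2013 NYSE minimum listing size, used as a guess
ownership_fraction = 0.05        # 5% triggers the SEC ownership filing
buy_price = 0.37                 # dollars per share
sell_price = 3.70                # dollars per share (the 10x move)

shares_bought = shares_outstanding * ownership_fraction   # 55,000 shares
cost = shares_bought * buy_price                          # $20,350
proceeds = shares_bought * sell_price                     # $203,500
profit = proceeds - cost                                  # $183,150

print(f"cost ${cost:,.0f}, proceeds ${proceeds:,.0f}, profit ${profit:,.0f}")
```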


2010.07.13 - WSJ - Letting the Machines Decide
http://online.wsj.com/article/SB1000142 ... 48080.html
- the book on the table at 2:50 in the video is The Intelligent Investor; it's upside-down.

2010.07.15 - Business Insider - How Spencer Greenberg Would Explain Quant Investing To A 5-Year Old
http://www.businessinsider.com/spencer- ... old-2010-7

2011.10.22 - Greenberg Talk @ TEDxBlackRockCity
http://www.youtube.com/watch?v=GZ69g8LtZc0

- the health care bill is an example of a situation where there's a lot of uncertainty but people are like "It's crap!" or "It's the best thing ever!"
1:40 - when we encounter such complicated situations we try to reduce them to simpler yes-or-no questions, like "Do I believe in big government?"
1:50 - in order to capture the uncertainty we need to think in terms of probabilities
1:55 - probabilistic thinking is really important b/c applicable situations pop up all the time. examples: love, health, careers, the economy, the galaxy, the weather, the fundamental laws of the universe (quantum theory)
2:45 - three examples:
1st - suppose someone cuts you off when you're driving. If you think "what a jerk", that's dichotomous thinking. Probabilistic thinking goes "do i know why this guy did this? what's the probability that he did it b/c he was a jerk, and not because there was an emergency?"
3:40 - 2nd example: suppose you've been dating and haven't met a person you really like. If you think "ugh, it's hopeless!" that's dichotomous thinking. Probabilistic thinking goes "What's the payoff if I 'win', and what are the odds that I'll find someone I really like?" So it gives you a strategy to follow: meet lots of people to increase the speed with which you find someone you really like.
5:15 - 3rd example: suppose you read a study that a certain food dye doubles the chance of cancer. Dichotomous thinking goes "whoa, this is dangerous". Probabilistic thinking goes, "Is the increase in risk worth the benefit of the food dye?" If the odds of the cancer go from 1 in a million to 2 in a million, it may not be a big deal. (a small worked version of this appears after these notes)
6:00 - summary of the benefits of probabilistic thinking:
1) it gives you a more accurate, nuanced perspective
2) it forces us to acknowledge the uncertainty in the world
3) it encourages us to adjust the strength of our beliefs based on the strength of the evidence
main takeaway: the next time you're tempted to say "I believe X" or "X is true", instead ask yourself, "What's the probability?"
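A tiny worked version of the food-dye example at 5:15, using the talk's illustrative 1-in-a-million baseline; the point is that a doubled relative risk can still be a negligible absolute risk:

```python
# Absolute vs. relative risk for the hypothetical food-dye example above.
# The 1-in-a-million baseline is the illustrative number from the talk notes.

baseline_risk = 1 / 1_000_000    # chance of the cancer without the dye
relative_risk = 2.0              # "doubles the chance"

new_risk = baseline_risk * relative_risk
absolute_increase = new_risk - baseline_risk

print(f"baseline:  {baseline_risk:.7f}")   # 0.0000010
print(f"with dye:  {new_risk:.7f}")        # 0.0000020
print(f"increase:  {absolute_increase:.7f} (one extra case per million)")
```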


2011.10.27 - Greenberg talk - O'Reilly Media - Reason vs. Emotion: Two Systems at War?
http://www.youtube.com/watch?v=qGMg3w_FC3c
- he gives examples of how emotions and reasoning can get you into trouble
0:45 - he explains how these two systems are each useful at times
2:00 - he then points out that modern society is very different from the environment in which we evolved.
- reasoning can alter our emotions (eg not being afraid on a roller coaster)
- takeaway #1: don't reason when emotional.
- takeaway #2: your emotions contain useful information
- takeaway #3: you can tweak your emotional response to things.
- takeaway #4: reasoning can go wrong; by learning about it, you can improve it.

2011.12.14 - Self-Skepticism talk at Skepticon
http://www.spencergreenberg.com/2011/12 ... kepticism/

Main Ideas:
We may not understand ourselves:
1. Reassess your strengths and weaknesses
2. Question whether you understand your decisions
3. Doubt your beliefs about your beliefs
- He briefly gives 3 quotes that show that doubting yourself is an old topic: voltaire, bertrand russell, and socrates
- Self-skepticism encompasses 3 topics: strengths & weaknesses, beliefs, and decisions
1:20 - 1. It often takes a directed effort to understand our strengths and weaknesses
2. Our beliefs are probably less accurate than they feel to us
3. We're often unaware of the real reasons for our actions
2:20 - He's going to describe his path to self-skepticism through 3 stories
2:35 - Cognitive behavioral therapy
- it makes an interesting claim: that when we're feeling emotional, we tend to distort reality (ie have irrational thoughts)
- it also claims that most of the distortions can be described in a list they've come up with
- examples: all or nothing thinking, jumping to conclusions, magnification
4:20 - 1st story - he was skeptical of CBT, so he tested it by writing down his beliefs when he was emotional, and checked them against the list of common distortions
5:25 - 2nd story - dealing with disagreement - he was debating a friend of his for a few hours on something and they couldn't come to a conclusion. He then had the thought, "If we're equally intelligent and both equally informed, why am I so confident that I am the one who is correct? If a third party came up to us and had to choose, why would they choose me over her?"
6:30 - there's a way out of the disagreement problem: have very reliable ways of arriving at knowledge
6:50 - here are some bad methods for arriving at beliefs: 1) believing whatever is pleasing to believe, 2) believe whatever your parents teach you, 3) believe whatever your gut tells you
8:00 - here are some good methods for arriving at beliefs: 1) deduction, 2) basic probability theory, 3) induction, 4) testing predictions, 5) Bayes' rule, 6) using your gut (wisely), 7) disproving yourself
11:00 - he recommends checking out lesswrong.com, then talks about how he ended up realizing that a lot of his beliefs didn't have strong foundations
12:30 - he proposes the audience try writing down a few of their beliefs and then try asking themselves, "how did i come to believe this?"
12:50 - 3rd story - one day he decided to write a list of all the important beliefs he had changed his mind about. He was surprised at how often he had changed his mind about important beliefs, and this made him realize that he was very likely to change his mind again in the future. That in turn made him worried about making decisions based on his current beliefs. And now the main idea: in order to guard against making bad decisions based on your current beliefs, you need to slightly downgrade the degree to which you believe everything you currently believe.
15:20 - he's going to talk about scientific studies that have been done on how humans make decisions. he describes one method where scientists ask people to rate themselves and then check whether those ratings could possibly be true.
he lists some classic statistics about how a disproportionately-large percentage of various groups will think they are in the top x%
17:20 - MI from the previous statistics: people often overestimate or underestimate their abilities. Both are dangerous: if we overestimate ourselves we may try things we can't handle or never try to improve; if we underestimate ourselves we may give up too easily.
19:10 - there are ways to avoid these biases: 1) we can try to rely on objective measures of skill, 2) we can force ourselves to search for our weaknesses, 3) we can seek out criticism from other people. here's an exercise: come up with a list of your own flaws, then ask a friend to add to the list, then ask your friend to hold you accountable for improving those weaknesses.
20:20 - 2nd type of studies - studies of HOW we make decisions. He describes the experimental method: split people into two groups and have everyone do the same task, with everything the same except for one small difference, and then see whether the two groups behaved the same or not. If they behaved significantly differently, you have reason to believe the difference was caused by the thing you varied. (a small sketch of this kind of two-group comparison follows these notes)
21:40 - example 1 - dating - What factors would affect whether you would agree to dance with someone? One study found that touching a woman briefly on the upper-arm doubled a guy's success rate from 10% to 20%.
23:30 - example 2 - interviewing - one study found no relationship between GPA / work experience and whether a college student was hired; what DID make a difference was whether the student tried to ingratiate himself with the interviewer. "we hire people we like, even at the cost of qualifications."
24:20 - example 3 - purchasing decisions - when given a choice between a web-only version and a print-and-web version, only 32% chose the print-and-web version. But when a third terrible option was added, print-only that cost the same as the print-and-web, 84% ended up choosing the print-and-web version.
25:30 - example 4 - criminal sentencing - in one study done on students, they gave a more attractive defendant roughly half the sentence they gave a less attractive one. This is attributable to the "beauty bias".
27:00 - how do we counteract these problems? we need to stay on our guard. He then recaps the 3 points that he mentioned at the beginning of the talk.
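A rough sketch of the two-group comparison described at 20:20, plugging in the 10%-vs-20% rates from the touch study; the group sizes of 100 are assumptions, and the normal-approximation z test is just one common way to do the check, not necessarily what the cited study used:

```python
# Two-proportion comparison in the spirit of the experimental method described above.
# The 10% vs. 20% rates come from the talk notes; the group sizes are assumptions.

from math import sqrt, erf

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Return the z statistic and two-sided p-value for a difference in proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Control: no touch, 10 of 100 said yes. Treatment: brief touch, 20 of 100 said yes.
z, p = two_proportion_z(10, 100, 20, 100)
print(f"z = {z:.2f}, p = {p:.3f}")  # roughly z = 1.98, p = 0.048 with these assumed sizes
```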

2012.01.17 Interview with TradeTech
http://www.youtube.com/watch?v=uYSs2ic3_A4

2012.07 - Economist Magazine Talk - How big data is reinventing global finance
https://www.youtube.com/watch?v=zr8hYqHj7sM
Q - Interviewer Kenneth Cukier
SG - Spencer Greenberg
MQ - Matt Quin, CTO of TIBCO
00:57 - Q: Finance is a black box for most people, even those on Wall St. You guys are supposed to be the alchemists who know something about what's in that black box. So...what do you guys actually do for a living? Who are you, and how are you different from people who do similar things? You're a "quant"; what's that?
SG: [...] We're different from most quant funds because we're trying to do fundamental investing instead of arbitrage.
2:30 - What's machine learning? A: Trying to teach a machine from datasets. Example: I have data on teas you've tried and whether you liked them. (a toy sketch of this kind of example follows these notes)
4:00 - Q: ...but the machine's not actually learning, right? It's just doing statistical analyses.
SG: It's doing the same thing human brains are doing.
Q: Ok right, I'm just saying that SkyNet isn't going to take over.
SG: Right, this is narrow artificial intelligence. Something like SkyNet would be "general artificial intelligence"
5:10 - MQ: We help companies deliver the right info to the right place at the right time. Example: Is it better to know a customer is thinking about leaving a retail bank before they leave or after they leave?
6:35 - Q: (to MQ) How are you doing something special? What's happening in finance?
8:45 - SG: Different types of quant strategies can have different effects on the markets. For example, a quant strategy could try to anticipate what trades the irrational market participants will do and make those trades first; that behavior would exacerbate volatility. Another quant strategy could be to duplicate value investors like Warren Buffett, and buy when the irrational market participants are selling; that quant fund would be reducing volatility through their behavior.
9:33-10:20 - Q: During the flash crash many (all?) of the quant funds stopped trading because the people in charge of them panicked and pulled the plug when things got choppy (implication: that's not good). When's that going to happen again? (implied Q: that's going to keep happening, right?)
SG: The flash crash is as relevant to us as it is to Warren Buffett, but you've got to watch out for the high-speed trading quant funds because crazy stuff can happen when these things interact in complex ways. [implied answer: our fund is not going to pull the plug during flash crashes, and funds like ours won't cause problems like the crash itself or the drying-up of liquidity as the crash got worse]
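A toy version of the tea example from 2:30: learn from teas you've already rated and predict whether you'd like a new one. The features, ratings, and nearest-neighbour method are all made up for illustration; this is not a description of anything Rebellion actually runs:

```python
# Toy "machine learning" on a made-up tea dataset: predict whether you'd like a
# new tea from teas you've already rated. Features and labels are invented.

from math import dist

# (caffeine_mg, bitterness 0-10) -> liked it?
rated_teas = [
    ((60, 7), False),   # strong black tea, quite bitter
    ((45, 5), False),
    ((25, 3), True),    # lighter green tea
    ((20, 2), True),
    ((5, 1), True),     # herbal
]

def predict_like(new_tea, k=3):
    """Nearest-neighbour vote: do the k most similar rated teas lean 'liked'?"""
    neighbours = sorted(rated_teas, key=lambda item: dist(item[0], new_tea))[:k]
    likes = sum(1 for _, liked in neighbours if liked)
    return likes > k / 2

print(predict_like((30, 3)))   # True  -> similar to the lighter teas you liked
print(predict_like((55, 6)))   # False -> similar to the bitter teas you disliked
```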


2013? Greenberg talk - Occam's Razor applied to machine learning
http://blip.tv/xamdam/spencer-greenberg ... or-5730707
- The talk starts at 8:40, before that is just them setting things up (nothing of interest)
8:40 - What is Occam's Razor? A: The idea that, all else being equal, a simpler theory is preferable to a more complicated one.
9:40 - What is Occam's Razor applied to Machine Learning? A: All else being equal, simpler models are better than more complicated ones.
9:56 - Important questions to answer: What do we mean by "model", "all else being equal", "complexity", and "better"?
Two more questions:
1) Can we define these terms to make the razor true?
2) If it works, WHY does it work?
10:28-11:20 - Hypothesis - "A hypothesis is just a proposed description of data." Example 1: In regression / curve-fitting, f(x) = 2x + 3 can be a proposed description of data. Example 2: In "classification" (not familiar with that), f(x1, x2) = sign(3x1 + 2x2 + 1/2) could be a hypothesis, where a positive sign gets one label and a negative sign gets a second label. Example 3: In a probabilistic setting you could use a probability distribution as a hypothesis. (a small sketch of the first two forms follows these notes)
11:20 - Hypothesis set - "A set of hypotheses, possibly augmented so that a probability is assigned to each hypothesis."
41:30 - Common misconceptions about Occam's razor
1)
2)
3) The 
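To make the 10:28 hypothesis examples concrete, here's a minimal sketch of the two functional forms mentioned in the talk notes; the inputs are made up, and this is just the note's f(x) = 2x + 3 and sign(3x1 + 2x2 + 1/2) written out as code:

```python
# The two hypothesis forms from the 10:28 examples, on made-up inputs.

def f_regression(x):
    """Regression-style hypothesis: f(x) = 2x + 3 as a proposed description of data."""
    return 2 * x + 3

def f_classification(x1, x2):
    """Classification-style hypothesis: the sign of 3*x1 + 2*x2 + 1/2 picks the label."""
    score = 3 * x1 + 2 * x2 + 0.5
    return "label A" if score > 0 else "label B"

print(f_regression(4))               # 11
print(f_classification(1.0, -2.0))   # score = -0.5 -> label B
print(f_classification(0.5, 0.0))    # score =  2.0 -> label A
```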


2013.04.08 io9 - How Bayes’ Rule Can Make You A Better Thinker - feat. Greenberg
http://io9.com/how-bayes-rule-can-make- ... -471233405
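Since Bayes' rule shows up both in the Skepticon list of "good methods" above and in this article, here's a minimal worked update; the 1% prior, 90% likelihood, and 9% false-positive rate are made-up numbers for illustration, not anything taken from the article:

```python
# Bayes' rule with made-up numbers: P(H | E) = P(E | H) * P(H) / P(E).

prior = 0.01             # P(H): base rate of the hypothesis being true
p_e_given_h = 0.90       # P(E | H): chance of seeing the evidence if H is true
p_e_given_not_h = 0.09   # P(E | not H): chance of seeing it anyway if H is false

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)   # total probability of E
posterior = p_e_given_h * prior / p_e

print(f"P(E) = {p_e:.4f}, P(H | E) = {posterior:.3f}")   # P(H | E) is only about 0.092
```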



  • 2013.06 - Automated Trader Magazine (paywall w/ preview)
  • The Economist Magazine - Redefining Technology
    • Moderator: JK - Joe Kolman, Managing Editor of The Economist
    • Panelists: 
    • SE - Steve Ellis, EVP of Wells Fargo
    • JB - Jeffrey Bell, CEO of Lime Brokerage
    • AF - Alexander Fleiss, President of Rebellion Research
    • 2:45 - Q to JB: How do you help your customers (startup hedge funds) control risk?
    • JB: That's actually a core part of what we do. We use buying-power checks and regulatory checks. The trick is being able to do those checks very quickly.
    • 3:35 - Q to JB: How fast is "fast"?
    • JB: Our fastest check is 200 nanoseconds.
    • 4:16 - Q to JB: What's the advantage of being faster to an automated trader?
    • JB: You can take advantage of 
    • 4:45 - Q to JB: Why would a bigger player (eg Morgan Stanley) need another risk-management system from you?
    • JB: You get another, independent opinion. And the benefit of our performance. They can separate their time-sensitive operations from their corporate systems.
    • 5:35 - Q to JB: What problems do smaller clients have?
    • JB: Smaller clients want the consultation.
    • 6:10 - Q to JB:
  • 2015.11.25 - YouTube - Spencer Greenberg - Talk at Effective Altruism 2015
    • MI of the talk: How can we find an idea for what to work on?
    • Come up with an impossible idea, or undiscoverable one, or a stupid one, or a tiny one.
    1. Impossible until now.
    2. Undiscoverable to others.
    3. Stupid sounding.
    4. Tiny at first.
    • 4:50 - The 'impossible until now' framework. "As new technology comes out, there's a lag time between when it comes out and when it starts getting used for new purposes."
      • 5:50 - Example: Machine learning applied to anything.
      • 6:50 - The two steps: 
        1. Learn a promising new technology really well.
        2. What industries has this not been applied to?
    • 7:15 - Undiscoverable to others: If only 1 in 80,000 people knows X, and only 1 in 80,000 people knows Y, and knowing both of those things makes something possible, you could be the only person on the planet in a position to do it.
      • 8:20 - Example: A person who knows a lot about programming language design and a lot about cloud computing.
        • Steps involved:
          1. Pick two unrelated topics you know a lot about.
          2. "What does combining them make possible?"
        • 9:00 - He created a computer program that randomly generated pairs of things he knew about, to see what kinds of connections he could make.
    • 9:10 - Stupid sounding - You know some secret (popularized by Peter Thiel)
      • Example: Wikipedia
      • Steps involved:
        1. What's an important truth that you know that few do?
        2. What seems dumb unless you know this truth?
    • 11:00 - Tiny at first - Solve a problem that you and your friends have, and then generalize the solution to solve its larger variant.
      • 11:50 - Example: craigslist was born out of an email list, and then it gradually grew.
      • Steps:
        1. Tackle a problem faced by a small group you understand.
        2. Slowly expand the features for a larger and larger group.
    • 13:30 - Chance of success = quality of idea x time commitment x ability to build the product x propensity from feedback x skill at securing funds x relentless pursuit of success x sales and marketing skill x leadership x random chance. (a tiny sketch of this multiplicative model follows this list)
  • 2016.03.31 - LifeHacker - How Your Mistakes Can Make You A More Rational Person
  • 2016.05 - Doctoral Thesis - Machine Learning at Extremes
  • 2016.05.06 - TiEcon - Four Eyes: a simple framework for understanding major problems in the world.
  • 2016.05.17 - ooNee Studios - Spencer Greenberg's Video Interview at TiETV
    • He gives a basic explanation of what he's doing at clearerthinking.
    • "We have over 20 programs"
  • 2016.09.20 - YouTube - TiE Global - Spencer Greenberg
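Regarding the 13:30 "chance of success" formula in the 2015 EA talk: because the model is multiplicative, a single weak factor drags the whole product toward zero. A tiny sketch of that property, with made-up factor scores (the factor names follow the note):

```python
# The multiplicative "chance of success" model from the 13:30 note: the product
# is only as strong as its weakest factor. Factor names follow the note; the
# scores are made up for illustration.

from math import prod

factors = {
    "quality of idea": 0.9,
    "time commitment": 0.8,
    "ability to build the product": 0.9,
    "propensity from feedback": 0.7,
    "skill at securing funds": 0.8,
    "relentless pursuit of success": 0.9,
    "sales and marketing skill": 0.1,   # the one weak link
    "leadership": 0.8,
    "random chance": 0.9,
}

print(f"overall: {prod(factors.values()):.3f}")            # ~0.024 despite mostly-high scores
print(f"weakest factor: {min(factors, key=factors.get)}")  # sales and marketing skill
```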

...