Friday, December 25, 2015

Some Tutorials in Turkish


Some tutorials in Turkish (Turkce ders notlari)

AI, Computational Complexity (Yapay Zeka, Cetrefillik)

http://sayilarvekuramlar.blogspot.com/2015/12/bilgisayar-bilim-yapay-zeka.html

Multivariable Calculus (Cok Degiskenli Calculus)

http://sayilarvekuramlar.blogspot.com/2015/12/cok-degiskenli-calculus-multivariable.html

Computational Science (Hesapsal Bilim)

http://sayilarvekuramlar.blogspot.co.uk/2015/12/hesapsal-bilim-computational-science.html

Non-Linear Dynamics and Chaos

http://sayilarvekuramlar.blogspot.com/2015/12/gayr-lineer-dinamik-ve-kaos-chaos-non.html

Functional Analysis

http://sayilarvekuramlar.blogspot.com/2015/12/fonsiyonel-analiz-functional-analysis.html

Linear Algebra

http://sayilarvekuramlar.blogspot.com/2015/12/lineer-cebir-linear-algebra.html

Ordinary Differential Equations

http://sayilarvekuramlar.blogspot.com/2015/12/diferansiyel-denklemler.html

Partial Differential Equations

http://sayilarvekuramlar.blogspot.com/2015/12/kismi-diferansiyel-denklemler-partial.html

Statistics, Machine Learning, Data Analysis

http://sayilarvekuramlar.blogspot.co.uk/2015/12/istatistik-ve-veri-analizi.html

Time Series and Finance (Zaman Serileri ve Finans)

http://sayilarvekuramlar.blogspot.com/2015/12/zaman-serileri-ve-finans.html

Multiple View Geometry (Coklu Bakis Aci Geometrisi)

http://sayilarvekuramlar.blogspot.com/2015/12/coklu-baks-ac-geometrisi-multiple-view.html

Friday, December 18, 2015

Backtesting

For stock trading one usually needs a backtesting framework. I prefer Python, and here is a comprehensive list of Python backtesting libraries:

Link

I heard about this list here, where the author was announcing his own backtester.

I just played with pyalgotrade, and it looks good.
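Whatever framework you pick, the core of a backtester is the same event loop: replay historical prices bar by bar, apply a trading rule, and track the resulting equity. A minimal sketch of that loop, with an invented moving-average rule and made-up prices purely for illustration:

```python
# Minimal backtesting loop: replay bars, apply a rule, track equity.
# The moving-average rule, window, and price series are illustrative only.

def backtest(prices, window=3, cash=1000.0):
    """Go long when price is above its trailing moving average, flat otherwise."""
    shares = 0.0
    for i in range(window, len(prices)):
        ma = sum(prices[i - window:i]) / window  # trailing moving average
        price = prices[i]
        if price > ma and shares == 0:
            # entry signal: invest all cash at the current price
            shares = cash / price
            cash = 0.0
        elif price < ma and shares > 0:
            # exit signal: liquidate the position
            cash = shares * price
            shares = 0.0
    # mark any open position to the last price
    return cash + shares * prices[-1]

prices = [10, 11, 12, 11, 13, 14, 13, 15]
final_equity = backtest(prices)
```

A real framework like pyalgotrade adds the pieces this sketch omits: data feeds, commissions, slippage, and order types.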

Monday, September 21, 2015

Python Code for the Algorithmic Trading Book

I converted some of the code from Dr. Ernie Chan's Algorithmic Trading book into Python. It is open-sourced here. Dr. Chan's mention of our project is here (at the end of the post).
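One family of strategies the book covers is mean reversion: trade against large deviations of price from its rolling mean, measured in units of rolling standard deviation (a z-score). The sketch below illustrates the idea only; the parameters and data are invented and it is not the repository's actual code.

```python
# Illustrative mean-reversion signal via rolling z-scores.
# Window, entry threshold, and prices are made up for this example.
from statistics import mean, stdev

def zscore_signals(prices, window=5, entry=1.0):
    """Return a position signal per bar: -1 (short), 0 (flat), +1 (long)."""
    signals = [0] * len(prices)
    for i in range(window, len(prices)):
        past = prices[i - window:i]
        m, s = mean(past), stdev(past)
        if s == 0:
            continue  # no dispersion, no signal
        z = (prices[i] - m) / s
        if z > entry:
            signals[i] = -1   # price stretched high: bet on reversion down
        elif z < -entry:
            signals[i] = 1    # price stretched low: bet on reversion up
    return signals

prices = [100, 101, 100, 99, 100, 107, 100, 93, 100]
sigs = zscore_signals(prices)
```

The book pairs signals like this with proper position sizing and statistical tests for mean reversion; this sketch is only the signal-generation step.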

Wednesday, March 18, 2015

Data Science Done Well Looks Easy

Link

Data science has a ton of different definitions. For the purposes of this post I'm going to use the definition of data science we used when creating our Data Science program online. Data science is:

Data science is the process of formulating a quantitative question that can be answered with data, collecting and cleaning the data, analyzing the data, and communicating the answer to the question to a relevant audience [..].

A good data science project answers a real scientific or business analytics question. In almost all of these experiments the vast majority of the analyst's time is spent on getting and cleaning the data (steps 2-3) and communication and reproducibility (6-7). In most cases, if the data scientist has done her job right the statistical models don't need to be incredibly complicated to identify the important relationships the project is trying to find. In fact, if a complicated statistical model seems necessary, it often means that you don't have the right data to answer the question you really want to answer. One option is to spend a huge amount of time trying to tune a statistical model to answer the question, but serious data scientists usually try instead to go back and get the right data.

The result of this process is that most well executed and successful data science projects don't (a) use super complicated tools or (b) fit super complicated statistical models. The characteristics of the most successful data science projects I've evaluated or been a part of are: (a) a laser focus on solving the scientific problem, (b) careful and thoughtful consideration of whether the data is the right data and whether there are any lurking confounders or biases and (c) relatively simple statistical models applied and interpreted skeptically.

It turns out doing those three things is actually surprisingly hard and very, very time consuming. It is my experience that data science projects take a solid 2-3 times as long to complete as a project in theoretical statistics. The reason is that inevitably the data are a mess and you have to clean them up, then you find out the data aren't quite what you wanted to answer the question, so you go find a new data set and clean it up, etc. After a ton of work like that, you have a nice set of data to which you fit simple statistical models and then it looks super easy to someone who either doesn't know about the data collection and cleaning process or doesn't care.

This poses a major public relations problem for serious data scientists. When you show someone a good data science project they almost invariably think "oh that is easy" or "that is just a trivial statistical/machine learning model" and don't see all of the work that goes into solving the real problems in data science.