Showing posts with label econometrics. Show all posts

Friday, April 01, 2011

Best LEED for developing countries

Word is that Portugal is likely to soon join the category of developing countries (as you might have heard). However, before you start sobbing, consider this: not all is doom. In fact, this could well be heaven for economists working with the famous Quadros de Pessoal longitudinal "linked employer-employee data" (LEED), and eventually-- as I will try to convince you-- it could translate into tremendous progress in improving the state of the whole world. Bear with me.

First, researchers working with the Portuguese LEED would then be able to sell their papers also as research in development economics (I am already working on convincing my co-author Miguel to add "economic development" as a keyword in our couple of projects using that data). And you tell me whether any other developing country can come up with data that beats Quadros de Pessoal! More importantly, just imagine not having to worry any longer about sample sizes, representative samples, non-response, measurement error-- issues that typically plague development economics research; imagine how much more could be uncovered about the economies of developing countries, imagine the giant leap in research progress on development, imagine finding solutions to all the developing world's problems, imagine the whole virtuous circle! Isn't Portugal's sacrifice then just a very kind gesture to humankind?


-Inspired by Tiago, the most enterprising Econ PhD student at Northwestern-

Thursday, March 24, 2011

On multicore Stata MP performance

Since I am in the middle of some very interesting discussions and experiments on this topic: it really does look like the difference in speed between the various multi-core/multi-processor Stata/MP versions is considerable. Right now I am running a comparison exercise* with my friend and co-author Miguel where, it turns out, a number of estimations with an 8-core Stata/MP 11 take roughly 10, almost 15, times less real time (I know, I hardly believe it myself) than (almost) the same estimations done with a 2-core Stata/MP 11. Taking the median across all estimation commands in Stata, an 8-core will outperform a 2-core by a factor of 2.28 (NB: this ratio appears to be larger than the price ratio of these two multicore versions!); I computed this figure from other stats available in this 250-page report on Stata/MP's performance.

The next quest should be to assess Stata's bold claim: "From dual-core laptops to the big iron of multiprocessor servers, Stata gets the most out of multicore systems. No other statistical software comes close" (my emphasis in bold). If true, that surely ought to boost Stata's status in the statistics/econometrics community (part of which-- I plead guilty too-- is currently infatuated with other programming languages such as Ox, Fortran, Gauss, Matlab etc., while typically leaving Stata for simple exercises or data manipulation).


*A brief update footnote here, for clarification (well, at least partial clarification; the huge, factor 10-15, execution time differential remains somewhat puzzling): we cannot really perform this comparative exercise directly. "(Almost) the same" here means that we do have identical specifications (and the same software and operating system), but different (very large) panel datasets; and in this particular case the structure of those datasets (specifically, the "connectedness" of the cross-sectional time-series data, but I do not want to go into too many details) matters, beyond an eventual difference in the number of observations (for our purpose virtually the same). If we ran identical specifications on exactly the same dataset with the 8-core and, respectively, the dual-core Stata/MP, the speed ratio could not be higher than 8/2 = 4, the theoretical limit. Given the type of exercise performed here, the real-time differential due solely to the difference in core numbers is most likely close to this limit.
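As a sanity check on that 8/2 = 4 limit: it is just the ratio of two Amdahl's-law speedups when the workload is fully parallel. A quick sketch (in Python purely for illustration, since this is back-of-the-envelope arithmetic rather than Stata code; the function name and the parallel-fraction framing are my own, not taken from StataCorp's report):

```python
def max_core_speedup_ratio(cores_a, cores_b, parallel_fraction=1.0):
    """Ratio of the Amdahl's-law speedup of a cores_a-core run over that
    of a cores_b-core run; bounded above by cores_a / cores_b."""
    def speedup(n):
        # The serial part runs at full cost; the parallel part is split over n cores.
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n)
    return speedup(cores_a) / speedup(cores_b)

# A perfectly parallel command: the 8-core beats the 2-core by at most 8/2.
print(max_core_speedup_ratio(8, 2))                 # 4.0

# If only 90% of a command's work parallelizes, the ratio shrinks.
print(round(max_core_speedup_ratio(8, 2, 0.9), 2))  # 2.59
```

Read this way, the median factor of 2.28 across Stata commands is roughly what a parallel fraction of about 85% would produce, which also illustrates why our observed 10-15x gap cannot come from the core count alone.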

Tuesday, August 31, 2010

Econlinks: Of Maths, Efficiency, and Language

  • Last but not least: two ok obituaries for Tony Judt, one in The Economist and one in the NYRB.

Sunday, August 08, 2010

The Manski Critique

Chuck Manski's recent NBER working paper, "Policy Analysis with Incredible Certitude" (non-gated version) ought to be a must-read for anyone doing or interested in policy analysis.

The study is written in an accessible way, such that it can in principle be followed without explicit academic training in Economics/Econometrics (there are plenty of further references for the technical details). It essentially sums up some of Manski's conclusions from his well-known research agenda on empirical methods in the social sciences, such as partial identification and the use of decision theory with credible assumptions for policy inference-- see for instance his books on these topics (which any applied econometrician should have on his/her shelf; though I confess my copies are currently still in Aarhus, awaiting my shipping/bringing them to Chicago): Identification Problems in the Social Sciences (1995), Partial Identification of Probability Distributions (2003), Social Choice with Partial Knowledge of Treatment Response (2005), and Identification for Prediction and Decision (2007).

Manski catalogues the 'incredible analytical practices' and provides examples for each category. His set consists of "conventional certitudes", "dueling certitudes", "conflating science and advocacy" and "wishful extrapolation". I find particularly compelling the sections on the conventional and, respectively, the dueling certitudes, which are preceded by a concise introduction on the incentives for certitude (wherein, as usual, US presidents' alleged statements come in handy). Similarly, Manski pins down very well the "wishful extrapolation" practice, where often very strong, unwarranted invariance assumptions are made (see his example on the selective incapacitation studies performed by RAND researchers in the early 80s, which gave rise to some heated political debates).

The one section I find less thorough than the others in Manski's paper is the one on "conflating science and advocacy". Acknowledging that impartiality in social science (in fact, with some contextual caveats, the point is relevant for science in general) is the ideal, and that research often falls far short of it, I think Manski is absolutely correct in pointing out that conflating science and advocacy is one of the main sources of incredible policy analysis. His illustration with excerpts from Milton Friedman's arguments for educational vouchers is also fine; indeed, some of the crucial empirical evidence needed for Friedman's stated policy implications in that context was (and still is, as Manski also states) missing, such as whether there are significant neighborhood (or peer) externalities involved (although plenty of better and/or more recent examples could have been used to make that point stronger...). What I did not particularly fancy is the between-the-lines allusion that Friedman did this (i.e., conflated science and advocacy) frequently or, perhaps, all or most of the time. For instance, Manski states on page 20: "Milton Friedman [...] had a seductive ability to conflate science and advocacy. [...] See Krugman (2007) for a broader portrait of Friedman as scientist and advocate."
That particular NYRB article of Paul Krugman's that Manski cites (presumably for other depictions of Friedman seductively conflating science with advocacy and the like-- a line which is otherwise not followed up or further substantiated in Manski's paper) contains, however, a considerable number of inaccuracies and misunderstandings, which others have pointed out very well in subsequent articles; see for instance Nelson and Schwartz's reply to Krugman. In fact, many might easily think that Krugman is himself guilty of conflating science and advocacy here (and elsewhere; some actually substitute 'conflating science and advocacy' with 'ignoring science for advocacy' as the practice in many of Krugman's NYT pieces- 1st bullet point...). And yes, OK: I wrote a post myself about that Krugman portrayal of Friedman, shortly after his article appeared. All in all, however, this minor point does not in any way diminish the essence of Manski's thesis; it's only that if this is about impartiality and professionalism in scientific practice (of which Chuck Manski is, no doubt, one of the champions), we should make sure that is also the (only) between-the-lines message.

Saturday, March 06, 2010

Weekend econlinks: The quest for perfection

  • Gelman writes a useful overview on causality and statistical learning (caveat lector: I have only read Angrist and Pischke's book among the three Gelman mentions; that one is very well written, but aimed at junior graduate students at best, hence the book's tag "an empiricist's companion" oversells it; and that has nothing to do with Josh Angrist kindly "advising" me to change my PhD topic/focus, sometime in my early graduate years, because 'nobody serious would be interested in structural modelling' :-)). I guess I would position myself more within the "minority view" set, represented here by Heckman (I wouldn't say that is really a "minority" within Economics alone, by the way), but the usefulness of these debates cannot be questioned. And an outsider's (to Economics) opinion, such as Gelman's, is always more than welcome. Related, the WSJ talks about statistical time travelling to answer interesting counterfactuals; I have a feeling I'll stick to my structural guns for now...

  • The ubiquitous problem with such academic et al. rankings (which I have brought up over and over, including in earlier posts and articles, particularly concerning the academic ranking obsession in Romania, where they also-- still!-- have problems understanding that a publication 'anywhere in ISI' can be total nonsense) is that they try to rank overall, i.e. over all disciplines, often over (too) long periods of time etc. The only meaningful hierarchies in science are those done within specific disciplines and, even better, subdisciplines, and over shorter periods of time, thus revealing top new places etc. Then, inter alia, one would not be able to claim that the biological sciences are advantaged, since there would be a within-discipline focus. I haven't heard a single serious (but plenty of marginal) scientist(s) stressing the relevance of the rank of her/his university/institution over that of her/his department/research group. Politicians and journalists should take note, too.

  • Gastronomic sacrilège: where have all the great cheeses gone-- roquefort, camembert, brie de Meaux, Saint-Félicien, gruyère, comté, münster, pont l’évêque, cantal, reblochon, tomme de Savoie, crottin de chavignol?! Worse, together with the cheese, soon gone might be oysters, and epsilon common sense... Quo vadis, France?

  • The most exciting scientific upshot I've heard about in a great while: explaining the tip-of-the-tongue moments. It finally becomes clear (although at this stage, I understand, it is still speculative/conjectural and needs more testing) why polyglots (such as I like to consider myself...) have more trouble remembering specific words than people who use a single language: "[...] this kind of forgetfulness is due to infrequency of use; basically, the less often you use a word, the harder it is for your brain to access it." Good, I will feel much better when invoking 'lapsus memoriae' next time :-).


  • How very true, though my feeling is that the battle for the brightest junior (and not only) Economists is far from over. It is, sadly, not Europe overall that might offer an alternative for European economists (not a chance: for starters, Europe needs to cut the embarrassing red tape that makes academics depend on useless, worthless, ridiculous bureaucrats, and to think about attractive real wages...), but Canada and Australia, which look more and more like worthy competitors to the top USA places (the bulk is way worse than pretty much anywhere in western Europe) (related, earlier).

Thursday, December 03, 2009

Easterly on Randomized Evaluation

By far the best read of the current week (so far, though it looks incredibly difficult to surpass):


Here’s an imagined dialogue between the two sides on Randomized Evaluation (RE) based on this book:

FOR: Amazing RE power lets us identify causal effect of project treatment on the treated.
AGAINST: Congrats on finding the effect on a few hundred people under particular circumstances, too bad it doesn’t apply anywhere else.
FOR: No problem, we can replicate RE to make sure effect applies elsewhere.
AGAINST: Like that’s going to happen. Since when is there any academic incentive to replicate already published results? And how do you ever know when you have enough replications of the right kind? You can’t EVER make a generic “X works” statement for any development intervention X. Why don’t you try some theory about why things work?
FOR: We are now moving in the direction of using RE to test theory about why people behave the way they do.
AGAINST: I think we might be converging on that one. But your advertising has not yet got the message, like the
JPAL ad on “best buys on the Millennium Development Goals.”
FOR: Well, at least it’s better than your crappy macro regressions that never resolve what causes what, and where even the correlations are suspect because of data mining.
AGAINST: OK, you drew some blood with that one. But you are not so holy on data mining either, because you can pick and choose after the research is finished whatever sub-samples give you results, and there is also publication bias that shows positive results but not zero results.
FOR: OK we admit we shouldn’t do that, and we should enter all REs into a registry including those with no results.
AGAINST: Good luck with that. By the way, even if you do show something “works,” is that enough to get it adopted by politicians and implemented by bureaucrats?
FOR: But voters will want to support politicians who do things that work based on rigorous evidence.
AGAINST: Now you seem naïve about voters as well as politicians. Please be clear: do RE-guided economists know something the local people do not know, or do they have different values on what is good for them? What about tacit knowledge that cannot be tested by RE? Why has RE hardly ever been used for policymaking in developed countries?
FOR: You can take as many potshots as you want, at the end we are producing solid evidence that convinces many people involved in aid.
AGAINST: Well, at least we agree on the much larger question of what is not respectable evidence, namely, most of what is currently relied on in development policy discussions. Compared to the evidence-free majority, what unites us is larger than what divides us.


Looks like Easterly's blog has a very good chance of becoming the top one among my econblogs (at least judging by how often I dedicate entire blogposts just to citing his posts, e.g. here or here, or a WSJ article here). Not bad, not bad at all: I do have pretty high standards, as all of you should have noticed! :-).

PS. See also an earlier entry on the topic (featuring again some of the heavyweights in this realm): 4th bullet point.

Wednesday, October 28, 2009

Mating, development aid, and the econometrics of it all


I recently helped one of my single male graduate students in his search for a spouse.


First, I suggested he conduct a randomized controlled trial of potential mates to identify the one with the best benefit/cost ratio. Unfortunately, all the women randomly selected for the study refused assignment to either the treatment or control groups, using language that does not usually enter academic discourse.


With the “gold standard” methods unavailable, I next recommended an econometric regression approach. He looked for data on a large sample of married women on various inputs (intelligence, beauty, education, family background, did they take a bath every day), as well as on output: marital happiness. Then he ran an econometric regression of output on inputs. Finally, he gathered data on available single women on all the characteristics in the econometric study. He made an out-of-sample prediction of predicted marital happiness. He visited the lucky woman who had the best predicted value in the entire singles sample, explained to her how he calculated her nuptial fitness, and suggested they get married. She called the police.

Continue reading this brief masterpiece by Bill Easterly.
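Stripped of the satire, the procedure Easterly describes is plain least squares plus an out-of-sample prediction. A minimal sketch with simulated data (all variables, coefficients, and sample sizes here are invented for illustration; this is obviously not how anyone should pick a spouse):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "inputs" for 200 married women: intelligence, beauty,
# education, family background (standardized, purely hypothetical).
n, k = 200, 4
X = rng.normal(size=(n, k))
# Simulated "output": marital happiness, linear in inputs plus noise.
happiness = X @ np.array([0.5, 0.3, 0.2, 0.1]) + rng.normal(scale=0.5, size=n)

# Run the econometric regression of output on inputs (OLS with an intercept).
X1 = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X1, happiness, rcond=None)

# Gather the same characteristics for 10 available single women and
# make the out-of-sample prediction of marital happiness.
singles = rng.normal(size=(10, k))
predicted = np.column_stack([np.ones(10), singles]) @ beta

# The "lucky" woman with the best predicted value in the singles sample...
best = int(np.argmax(predicted))
```

The punchline, of course, is that everything after `np.argmax` is where the method breaks down.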

Sunday, April 19, 2009

Sunday night econlinks

  • Reviewing the reviewers, with cross-disciplinary insights. Who would have thought that Economics is somewhat like History? :-). Thanks to Daniel for the link!

  • The Economic Journal announces two interesting changes to its submission/refereeing process, aiming to speed up and raise the bar in peer review: i) referees will no longer be paid individually; instead, the 10 best referees each year will each get a £500 prize. ii) In terms of the submission process, submitting editorial letters and previous referee reports from other journals is encouraged. Read the whole letter. I think whether i) speeds up and improves the refereeing process is really an empirical question, but ii) should clearly be adopted formally by other journals as well.

  • Definitely a necessary debate within the structural vs. reduced-form economics context. I think/hope that this was just the warming-up stage :-). Some extremely interesting contributions so far, all from top names in the Econ field: i) Marschak's Maxim nowadays; ii) Instruments of Development; iii) Better LATE than nothing. Not entirely on the same frequency, but somewhat of a prelude-- third bullet point and earlier links therein.

  • We know quite a bit about the bad ones by now, e.g. from the everyday news... So it's time to hear more about the good pirates. (There should, of course, be no question about the best pirates.)

Google trends...

... to predict the present: an excellent short paper from a few days ago by Choi and Varian. This is incredibly interesting (and, as you will read, it works pretty well even in its simplest form!); I expect quite a bit of new research to follow up on their call.

Sunday, March 15, 2009

Sunday morning econlinks

  • Incentives and globalization, a brief but very interesting interview with Luis Garicano. Topics tackled here are CEOs, football, and...everything else.

  • Finally, for those of us who have non-convex desires, you might also consider the girl's marginal benefits (the lyrics) :-). The latter is also my proposed song of the day. All together now, accompanying Mike Toomey and Julia Zhang (excellent stuff, ad majora!): "Cause girl your marginal benefits far outweigh your marginal costs/ Without our equilibrium baby well you know I'd be lost/ Trapped inside this market I need you to buy my love/ Girl without your complementing goods well I'm just not enough"

Friday, February 13, 2009

Econlinks for the weekend

  • It is a very important research topic, granted (and my hunch has always been, and continues to be, that 'deliberate practice' explains most of the observed high achievement), but my feeling is that the findings & methodology therein are so far overrated (and over-mediatized) and that much more research is needed for a satisfying, not to say definitive, answer... One ought to welcome, however, the distinction between plain hard work throughout (the 99% perspiration...) and high-productivity hours (the nap after lunch?...).

  • Gary Becker and Kevin Murphy with more words of wisdom: there's no stimulus free lunch. Murphy continues from the ideas he put forward here. I guess the debate is really, or mainly, about the size of the multiplier; the fact that Becker and Murphy insist so much that it is much lower than the advocated 1.5 should get other economists to pay more attention (including some European economists I know who also believe the multiplier might well be that large...).

  • "I think we economists love to speculate about heterodox theories when times are good and we feel free to discuss experimental alternatives to economic orthodoxy (and nobody is paying us much attention during good times anyway). But when the global economy is in free fall and everyone else seems ready to throw each and every Econ 101 principle out the window, we get desperate to save the core principles that lead to prosperity and development." Read more in Easterly's excellent piece on the economists' returning home.

  • Esther Duflo sometimes ventures into areas where she does not necessarily have a serious comparative advantage (see the 3rd bullet point here for the area where one shouldn't start an argument with her...). I don't see how proper incentives (here, disincentives...) can be given by imposing pay caps in the financial sector, which is, unfortunately, happening de facto now (at least if the respective financial institution receives government help). Philippon's point is well taken (and the co-authored research it is based on looks pretty sound), but he stops short of recommending any policy initiatives that would involve income ceilings, despite finding that the financial gurus were paid too much. Au contraire, I think Posner and Becker ('at any level' is well worth bookmarking...) are the ones who are right in this context.

Thursday, February 05, 2009

Econlinks

  • Massively collaborative mathematics (via Terry Tao). I count myself an idealist when it comes to such ideas, just like the author of this post, and there are some well-argued points therein, but... my more recent economics background brings me back to earth. So here's one main reason (there are others, linked particularly to the nature of the problem chosen to be solved by such "massive collaboration") why I think this will not work out (in Maths or any other science, for that matter, Econ included): the costs (particularly the time and effort to follow such discussions, not to mention trusting the person-- if any-- to monitor it all etc.) would far outweigh the benefits. Unless the participants are far more efficient than average (in time management & co.) and, perhaps crucially, no longer concerned with career building... Somebody like Terry Tao perhaps, to keep it to Maths, though he does not seem overenthusiastic either :-).

  • Nature editorial on a "scientific responsibility index". I think some of these indices should not have so much to do with the aggregate, such as a country/nation dimension (for instance, it is in my opinion almost ridiculous to claim that researchers from/in a certain country ought to feel in any way tarnished by other co-national researchers'-- possibly from other fields, other times etc.-- lack of ethics, and thus by an eventual 'country science ethics index'...), but otherwise the article is on the right track... Via Razvan, on Ad Astra.

  • George Soros, with an interesting (and financially very informative) FT article entitled "The game changer" (plus an account of how well his own financial operations fared). Obviously I do not agree with all his points; for instance, one paragraph I do not fancy is the following: "As it is, both the uptick rule and allowing short-selling only when it is covered by borrowed stock are useful pragmatic measures that seem to work well without any clear-cut theoretical justification." In fact, I think it is precisely because we do not have clear-cut justifications, and lent ourselves too much to "pragmatic" experimentation of whatever kind, that we ended up here. One new 'pragmatic' rule is not necessarily better than a previous 'pragmatic' rule :-). Thanks to Paul for the link!

Thursday, December 18, 2008

The new RAE UK is out

...and for Economics and Econometrics, the quality profiles can be consulted here.

By strictly ranking the percentages of research in the highest and, respectively, the second-highest research categories, the top 10 Econ departments in the UK RAE 2008 (not very unexpectedly) are the following:

  1. LSE
  2. UCL
  3. Essex, Oxford, Warwick
  4. Bristol, Nottingham, Queen Mary
  5. Cambridge
  6. Manchester

Tuesday, August 19, 2008

The next thing is to inquire whether the saints are listening...

The empirical conclusion from this analysis is important. A little prayer does no good and may make things worse. Much prayer helps a lot.


If Jim Heckman says that, it's gotta be true.
So, either stop praying altogether or pray 24/24; nothing in between helps...

PS. Andrew M. Greeley's letter attached at the end of Heckman's paper deserves praise on its own.

Wednesday, April 30, 2008

Econlinks today

  • Here's the most ridiculous thing I've heard so far within the academic publishing business: deliberately slowing things down by sitting a whole month on each submission before doing anything with it. Via Andrew Gelman. Something like this might well be practiced by more journals and in more fields (and surely I am thinking mostly of my own field here...) than currently known: it could explain a substantial part of the often exaggerated waits before one gets back referee reports (complementing the fact that referees are not easy to find and may be slow themselves, see also here). I think it simply shows the incapacity of those editors to function as editors, if that is the case. And obviously, excessive crowding/queueing can be solved in this context, as in many other contexts involving congestion, by raising submission fees (despite the apparent objection of some people, which Gelman also mentions, that people don't have the money for it-- give me a break, I'd say: if you are indeed such an underpaid academic, you probably can't produce the quality required for that top journal anyway; and for universities in places that really run low on budgets and remuneration in general, such as Africa, Eastern Europe etc., some reduction or waiver could be in place).

  • "Econometrics: qu'est-ce que c'est?" or econometrics as taught at the University of Michigan (where other Econ professors also seem to be very talented as far as music is concerned) :-). I think many economics/econometrics professors elsewhere could learn something from this-- it isn't for nothing that most students consider econometrics courses the most boring courses they (have to) take... Here's the academic website of the excellent performer above, in case anyone wants to contact him for teaching advice.

  • Discussion on the merits of (further/re-)regulating financial markets: Becker and, respectively, Posner. I believe Posner is for some reason becoming too skeptical of the powers of the free market, so I'll strongly recommend only Becker's analysis this time :-).

  • Collected advice for young economists (via Tyler Cowen on MR), from senior economists. I have read all of these pieces before (though, unfortunately, haven't always followed the advice in there...), but it is excellent to have them in one place.

Monday, November 19, 2007

Replication in Economics

Here's my most interesting read of the last weekend: a recent article by Dan Hamermesh, published in the Canadian Journal of Economics, on "replication in economics". It is also downloadable as a PDF from Hamermesh's site (Dan Hamermesh's website contains much more information potentially useful for any economist, whether junior or senior-- a remark for those of you who did not know about this excellent online resource...). But back to Hamermesh's viewpoint in this context: the short article linked above is extremely interesting and informative. It also contains a proposal (a "modest proposal", in the text) which I support as very welcome: the only way to see more replication studies done is to have them commissioned (from senior researchers, with tenure...) by journal editors.

While I'd advise any empirical economist (or better: any economist who currently does, or plans in the future to do, some empirical work) to read the paper, I'll point out below some possible omissions in Prof. Hamermesh's article that would have been particularly interesting to me:

a. What to do about registered (say, matched employer-employee) datasets that cannot be made publicly available due to confidentiality agreements (though clear steps for accessing them can always be provided as information; in practice, some of these might take too long etc. for the replication to be worth undertaking); the use of such datasets is increasing day by day, particularly within applied microeconometrics, hence I'd say they merit some separate discussion. Caveat lector: although their case is somewhat similar to that of any proprietary database, these administrative datasets do have one common characteristic, namely that they are usually provided by the official statistics bureaus of individual countries. This might suggest possible (ad hoc) agreements between journals and such bureaus to allow replication of studies under the same very strict confidentiality rules to which the initial author had been subject.

b. There is no discussion in Hamermesh's paper about structural approaches: in those cases replication, as commonly understood (particularly "scientific replication" in Hamermesh's terminology), is typically not an issue (subject to correct coding and analysis, to start with). In fact, one way to avoid frequent requests for replication using other datasets and other time periods etc, would be to have a structural framework to start with...

Sunday, November 18, 2007

Econlinks for 18-10-'07

  • Car preferences of faculty at Harvard. They could not identify the owners of the Porsches, but the BMWs belonged mostly to the Econ faculty (also: "Of 18 respondents in the economics department, eight said they owned luxury cars—one of the highest percentages")... Now compare that with the Subarus owned by most faculty in other departments and tell me which department there has taste :-). Via Greg Mankiw.

  • Some (Japanese) econometricians have time to combine their haiku and econometrics knowledge :-). Here's one superlative result of that endeavour: "Econometrics Haiku" by Keisuke Hirano.

Wednesday, April 04, 2007

Econometrics: A few reasons to use Ox over Gauss

...though I am definitely going to try Gauss as well and learn it better than I know it at the moment (and I certainly think one should try many programming languages; each could have particular comparative advantages in specific routines etc.). Below I place some excerpts from some very interesting discussions on an Ox discussion list that I consult quite frequently. I will leave out the names of the authors, since everybody can trace the fragments on the list linked above (more precisely, we are talking about the archive for April, so here), and keep only what I consider the essential excerpts (I will also place links to the full messages at the end of the respective fragments). They pretty much contain what I would have to say on the subject as well (obviously I am not as experienced as most of the researchers who answer here; as a personal note, I particularly like the references to the power of the BFGS log-likelihood maximization routine-- which became clear, inter alia, in part of the research for a paper I co-authored-- and of course the easy extension using C/C++, a programming language I learned long before I ever got to learn/do econometrics-- OK, so I might be somewhat biased in preferring an object-oriented, modular econometrics package :-)).


So, we start here, with the question:

"Dear Ox Users, I am a PhD student who is trying to choose between Gauss and Ox. I have tried to ask some people which one to choose, I have read some documents to guide myself in choosing between the two, but still could not make myself clear on which one to choose. [...]" Link.

And some answers I particularly liked were:

"I think that trying to find "the best econometric software" is the wrong way to think about statistical/econometric programming nowadays. 15/20 years ago, people used to learn only one or two econometric software ,but I think that the things are very different now. Even if Gauss highly used in finance, both Gauss and Ox are great software, but sometimes Gauss is better and sometimes Ox is better. I simply depends on what you do. [...] If you are new to econometrics programming, I would say that Ox is easier to learn than Gauss, it is very powerfull (especially if you do arfima, markov models and panels) and it's free for universities." Link.

"[...]I think Ox has an excellent syntax based on C (with its extensions the most used programming language, there must be a reason). I think it is easy to read and to learn. Ox, if you want it, is also object oriented, and it is very easy to make reusable code or packages to share with others. There is not so much documentation on Ox as for Gauss for two reasons: 1) Gauss is still more widespread, 2) the official documentation of Ox is Excellent and public (on the web). Graphs in Ox just look better and are very easy to integrate into LaTeX documents. Ox is fast, but also Gauss is pretty quick. Ox may be extended in C. It is easy to make simple GUIs with Ox." Link.

"[...] 1) Gauss got there first, and is American. Hence it has a huge user base, and lots of code written for it. Alas, it is a poorly designed language. It's too unstructured, and it's easy to write incomprehensible spaghetti code. 2) Ox got there second, and is British. It has, inevitably, a more modest user base. It is, however, a beautifully designed language, combining all the power of matrix programming with the logical structure of C. (It is actually a lot simpler to write than C.) If you follow the recommended coding conventions (naming, indenting) then it is easy to write elegant, compact and self-documenting programs. It's got object-orientation and classes if you are into that, but you don't need to be. So - no contest frankly. If you want to learn a new language for your research, you will not regret starting with Ox. Try Gauss later when you are ready for the worst." Link.

"[...]I used Gauss for econometric programming in the past, and I have to say I did not like it. I oftentimes ended up with"incomprehensible spaghetti code". For starters, the language is not case sensitive, which bothers me. Ox is great. Its syntax is similar to C, it's fast, it has an excellent numerical library, and so on. Just try to do some Monte Carlo where you numerically maximize a log-likelihood function using BFGS in Gauss and Ox, and you will notice the difference. Additionally, please note that you can run Gauss code using Ox. Finally, Ox is free (for academic research and teaching), which is a huge plus in places like Brazil, where we don't have access to university wide site licenses. " Link.
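The Monte Carlo exercise suggested in that last excerpt (numerically maximizing a log-likelihood with BFGS, repeated over many simulated samples) is easy to illustrate. Here is a minimal sketch in Python rather than Ox or Gauss, using SciPy's BFGS optimizer on a normal log-likelihood; all the names and numbers in it are my own choices for illustration, not anything from the mailing list:

```python
# Minimal Monte Carlo / BFGS sketch: repeatedly draw a normal sample,
# maximize its log-likelihood with BFGS, and average the MLEs.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
mu_true, sigma_true = 1.0, 2.0

def neg_loglik(theta, x):
    mu, log_sigma = theta          # parameterize sigma on the log scale
    sigma = np.exp(log_sigma)      # so BFGS can search unconstrained
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + ((x - mu) / sigma) ** 2)

estimates = []
for _ in range(200):               # 200 Monte Carlo replications
    x = rng.normal(mu_true, sigma_true, size=500)
    res = minimize(neg_loglik, x0=np.array([0.0, 0.0]), args=(x,), method="BFGS")
    estimates.append((res.x[0], np.exp(res.x[1])))

mu_mean, sigma_mean = np.mean(estimates, axis=0)
print(f"mean mu_hat = {mu_mean:.3f}, mean sigma_hat = {sigma_mean:.3f}")
```

The averaged estimates should land very close to the true (mu, sigma); it is exactly this kind of loop where, per the excerpt, the speed difference between packages becomes noticeable.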


PS. It should be said that, to be completely fair, one should also listen to the viewpoints of the people on a Gauss-users list :-). If anybody has such references, I'd be happy to link them here.

Monday, February 19, 2007

Best thing I've read in the last couple of weeks

....is John Rust's Comments on Michael Keane's "Structural vs. Atheoretic Approaches to Econometrics". Perhaps too strong in some parts (though I can understand why), but simply great overall (inter alia, possibly the best defence of structural econometrics I've seen so far, and for sure the clearest and most concise one). I am still reading the (excellent so far) article by Keane (forthcoming in the Journal of Econometrics; see a working version here).

Thanks to Nicolai for pointing me to Keane's paper and Rust's comments on it.

PS. Reading Rust's short paper mentioned above, one also gets a better idea of which parts of Rubinstein's critique of Levitt and Dubner's 'Freakonomics' I agree with, to link with my previous post.