Wednesday, February 25, 2015

The Flipside of Brit, and Single Good Slippers


It wouldn't matter whether it was the right or the left one, so long as the slipper he found from a broken pair was still usable. It was in the seventies that a farmer in a village in Shwebo Township, who happened to be the elder brother of one Dr. Ba Kyi of Mandalay University, told me of a history student who went from village to village doing research. When this student found a single good slipper from a broken pair, he would put it in his shoulder bag for later use. Last week, when I told this story to my retired friend in Mandalay, himself a graduate of Mandalay University, he told me that Dr. Ba Kyi was a professor of mathematics and that I must have met his brother in Chi-pa, a large village near Shwebo town. By the way, I don't remember whether I asked the history student's name and forgot it, whether the farmer didn't know it, or whether I never asked at all.

A lot of people think Wikipedia is the flipside of Encyclopedia Britannica, where "flipside" would mean inferior, if you like. For me Wiki is just another encyclopedia, and it has been just about the only encyclopedia I have used for a long time. Before that I had a pirated copy of Encarta, and it was fine. From the time both Google and Wiki existed, I would look in Wiki first, follow the links given there, and then search Google for more. This habit stuck because the Internet makes it so easy for us to find information. In my experience, for example, when I played around with R, the free statistical software, I found its habit of printing results in scientific notation in some calculations really annoying. Well, I didn't try looking for the way to get plain-number output in R's 3,400+ page reference manual. I looked for it on the Web and got instant gratification. As for the Wiki vs. Britannica issue, it never occurred to me that what's in Wiki might be weak and that I should check it against Brit once Brit went online. And I haven't even really tried its free online version with its pop-ups and advertising.
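For anyone with the same annoyance, the usual fix, for what it's worth, is R's `scipen` option, which penalises scientific notation in printed output. A minimal sketch with made-up values:

```r
# By default R prints large (or tiny) numbers in scientific notation:
print(1e9)                       # shows 1e+09

# Raising the 'scipen' option biases printing toward plain fixed notation:
options(scipen = 999)
print(1e9)                       # now shows 1000000000

# Or format a single value without touching the global option:
format(1e9, scientific = FALSE)  # "1000000000"
```

That one-liner is exactly the kind of thing the Web hands you instantly and the 3,400-page manual buries.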

This wandering of thoughts started when I chanced to watch a seminar organized by the Sarpay-beikman library, with the highly respected retired librarian of the Rangoon University Library, U Thaw Kaung, moderating, and two lady librarians as panelists. I happened to tune in to that TV broadcast right in the middle and saw only part of it, but I could appreciate the efforts our librarians are making to move towards open-shelf access to the library, to help visitors, and to embrace new technologies like scanners, computers and the Internet. Then there was one piece of practical advice from the moderator: not to cite Wikipedia as a reference, as it is not as reliable as, for example, Encyclopedia Britannica.



Well, I was vaguely aware that there has been (and still is) such advice on the Web and elsewhere, yet I thought it had nothing to do with me, because all I need is some introduction to whatever I am interested in at a given moment, and for that Wiki suits me well. To be frank, I also don't want my cellular data connection ticking away too long, and Wiki is something of a great one-stop shop. However, now that I have heard this advice from a venerable librarian, I take it that it is great advice, at least for researchers and academia.

Understandably so, because Britannica "is written by about 100 full-time editors and more than 4,000 contributors, including 110 Nobel Prize winners and five American presidents" (Encyclopædia Britannica, Wikipedia). All the same, I felt that for ordinary folks and guys like us, using Wiki would somehow be like collecting a single good slipper for future use. Unsavory maybe, but not illogical.

Yet that is not the end of the story, because curiosity nagged me to look a bit further. Perhaps somewhere under the skies there was someone who perfectly fit the single slipper: not a Cinderella, but a plain, strong, working-class lass for our student. According to Wikipedia, "The 2013 edition of Britannica contained approximately forty thousand articles, and by comparison to Wikipedia, was over one hundred times smaller than the current number of articles contained in Wikipedia - specifically, 4,714,319 articles in English (as of February 8, 2015)."

On the other hand, Wikipedia allows any user to create and edit its articles. It seems obvious from that that a bunch of nobodies, however large in number, would be no match for a band (4,000 strong!) of polished scholars, professionals and luminaries. However, that's a great topic to fight over, and not only has there been heated debate about Brit vs. Wiki in the specifics, but there have also been battles on the wider fronts of crowd vs. experts and the merits of open collaboration vs. traditional hierarchical structures.

Anyway, it may be particularly distasteful for academia when someone says the taboo on Wikipedia is a problem of culture more than anything else (Students Should Be Allowed To Cite Wikipedia, Ellen Fishbein, The Observer, May 1, 2014):

You’re writing a paper, you Google your topic, and the search returns that first, magical article: the Wiki page. Perfectly free of tedious language, abstractions or scholarly euphemisms, it gives you what you need. You go to Encyclopedia Britannica; you look up the article on your topic. You see a few dry, grammatically correct lines interspersed with the same facts you saw on Wiki. “Good enough,” you think, and you footnote it.

Maybe you get fancy: Instead of looking for another encyclopedic source, you scroll down to Wikipedia’s list of citations. You open the pages, skim them for relevance and add them to your bibliography.
...
But the ban on Wikipedia in academic circles is not a problem of denial—it’s a problem of culture.

But the big scholars stay aloof while the masses, and even people like the judges of the high courts of India (Indian high courts show continued enthusiasm for citing Wikipedia, Alok Prasanna Kumar, Live Mint, 10 February 2015), trust Wikipedia:

Wikipedia use remains fairly widespread among the high courts of India. No fewer than 18 out of 20 high courts in the last 10 years (excluding the newly constituted high courts of Manipur, Meghalaya, and Tripura) have referred to Wikipedia as an authority at least once in their judgements.

A detailed search of legal databases reveals no fewer than 84 high court judgements that cite Wikipedia as an authority.

The question is why those big scholars haven't tried improving Wikipedia by writing and editing articles themselves. As Neil Selwyn, a professor in the faculty of education at Monash University and the lead author of the Wikipedia study, noted (Wikipedia not destroying life as we know it, John Ross, THE AUSTRALIAN, February 11, 2015):

... if Education Minister Christopher Pyne wants Australian universities to have real impact, the best way would be to force professors to spend a week editing Wikipedia pages in their areas of expertise.
“In terms of making a real contribution to public knowledge, what better thing could we do?
“I go online and look at stuff in my area and it’s really shocking. But I have not been on there to make it any better, because I’ve got to write grant proposals and academic articles that no one will ever read.”

Ross summed up the findings of the Monash University study: "students and academics alike are missing a giant opportunity to contribute to world knowledge by shunning the right to edit the world’s sixth most used website."

There has been much criticism of, and many complaints about, Wikipedia's editing system from outsiders as well as insiders, and that may also be one reason why the big scholars choose to stay aloof. According to Tom Simonite in The Decline of Wikipedia, MIT Technology Review, October 22, 2013 (http://www.technologyreview.com/featuredstory/520446/the-decline-of-wikipedia/#comments):

In the established model, advisory boards, editors, and contributors selected from society’s highest intellectual echelons drew up a list of everything worth knowing, then created the necessary entries. Wikipedia eschewed central planning and didn’t solicit conventional expertise. In fact, its rules effectively discouraged experts from contributing, given that their work, like anyone else’s, could be overwritten within minutes. 

One commentator who joined Wikipedia in late September 2013 as an editor posted his frustrations:

Craig_Weiler Oct 23, 2013
... The ideologues are organized into a well oiled machine and know just how to game the bureaucracy.   There is no hope of getting around them or getting Wikipedia to recognize this as a problem because they are embedded with friendly administrators working on their side.  They use "consensus" to gang up on other editors and push their point of view into the articles.  They control hundreds of articles this way.  Going up against them is massively inefficient and a complete waste of a normal human's time. ...

But one commentator in particular persistently condemned the nonprofit Wikipedia for stealing the show from traditional for-profit encyclopedias and, worse, for robbing the knowledge producers:

aestu Oct 24, 2013
... You try to make it sound like for-profit encyclopedia processors are the maligned "1%" profiting at the expense of everyone else. In reality, publishing has always been a low-margin business, and not only are the encyclopedia firms struggling to get by, but so are the original researchers that produce the knowledge that fill those volumes and whose efforts are repaid through consultancies and other fees.
As they say, why buy the cow if the milk is free? How is research to be funded by the Wikipedia model?

aestu Oct 24, 2013
... Wikipedia's relationship with the world of intellectual discovery is parasitical. Academics and professionals produce the knowledge on their own time and at the expense of themselves or their institutions. Wikipedia collects and posts it so that any idiot can Google it up for free, but does nothing to fund or otherwise encourage the discovery of knowledge. 
...
Wikipedia has been a locust swarm for many knowledge experts, from historians to scientists to literary experts, who rely on consulting fees to put money in the fridge and pay for their training and education. Wikipedia deprives these people of their livelihoods.

aestu Oct 24, 2013
There is no free lunch. If research is not funded, there is no research. Wikipedia relies on research for its content. 
Encyclopedia publishers are not volunteers, and the research they put in their materials comes from institutions and individuals who are professionals at what they do. The fees that encyclopedia firms pay out to those whose content populates their volumes pays for their professional pursuit of knowledge. 
Obviously, it's cheaper to simply take than to buy, and that is why Wikipedia has an advantage. Wikipedia doesn't pay back to the system that creates the content it uses. 

aestu Oct 24, 2013
@ray-rogers @aestu  
... Just because the use of a public library is free to you, the end user, doesn't mean that the library is free in the absolute. Many libraries around the world are feeling the crunch because Wikipedia accomplishes the same goal without taxes or fees, but doesn't fund research in the way libraries buying books for their shelves does. 

aestu Oct 25, 2013
Wikipedia can't do that because traditional encyclopedia editors are paid, tenured professionals whose cost is recouped in sales of the encyclopedias they edit.
You can't hire "actual experts" and keep WP free. There is no free lunch. Or, to be more accurate, you get what you pay for. 

dsgarnett Nov 21, 2013
... And where it's weak appears to be the result (more and more often now) of a lack of a business reason for it to be strong. 

It seems most of the 310 comments altogether ignored those, but a handful of commentators replied:

apostasyusa Oct 24, 2013
... If there are in fact competitors to Wikipedia, and they are for profit entities that are failures at competing with a nonprofit, then what does that say about how they are running their business? 

Maybe they are giving way too much of their revenue to fund research?!
What sort of examples do you have that demonstrate for profit encyclopedia publishers funding research?

apostasyusa Oct 24, 2013
@aestu
Wikipedia isn't free.  People, including myself, donate to it.

You make it sound as if all research ever conducted is funded by encyclopedias. Can you show us all examples of research funded by encyclopedia companies.  You make it sound as if their contribution is sorely missed and research that would otherwise be conducted is not.

You also make it seem as if the knowledge that research funding compiles, should only be to the benefit of those who pay for it.  In that case encyclopedia Britannica would only be allowed to publish the data the company helped to find through research donations. ...

ray-rogers Oct 24, 2013
@aestu In fact I never said or meant that the availability and acquisition of knowledge was "free".  The abstract point I was trying make is that the distribution and availability of knowledge is a public and human good.

But to your point, the actual establishment of many public libraries was considered a useful charity; Benjamin Franklin and Carnegie were major contributors giving more or less freely for what they, and I, consider the public good.  ...

apostasyusa Oct 25, 2013
... Wikipedia only exists because of donations. The fact that usage is free is of no consequence. ... Fortunately for Wikipedia, people think it is important enough to fund it independently and that the organization as a whole is fairly inexpensive to operate.

gnorn Oct 25, 2013
@aestu When did encyclopedias ever "fund original research"?  Not for centuries.

mspacek Oct 26, 2013
@aestu Gosh. Could you please provide some references for exactly how Wikipedia freeloads off of original research? Since when is anyone or anything responsible for paying for references? By your logic, scientific articles should pay something to every other article they reference, because otherwise how could real original research be funded?

More to the point, how does referencing a source without contributing to it take anything away from that source? It does the opposite. Increasing the visibility of a valuable source is itself a valuable service. You have Wikipedia to thank for that service, which it provides for free.

mspacek Oct 26, 2013
@aestu Your arguments boil down to a love of centralization and executive power, and a hatred and distrust of distributed power. 

gnorn Nov 21, 2013
@dsgarnett Believe it or not, a lot of people are motivated by an altruistic drive to share knowledge.

What can we say? Some seem to have an inherent distrust of all altruistic efforts and would quickly dismiss them as shams, or condemn their products as inferior and wasteful, or even as parasitic. I guess such distrust is more likely traceable to cultural origins than to reason, and is therefore mostly ignorable.

You'll find lots and lots of arguments and verdicts for and against the phenomenon that is Wikipedia. I don't know of any other source of knowledge, printed or online, that keeps such extensive documentation as Wiki does of the types of criticism leveled at it ("Criticism of Wikipedia"), quotations from its critics ("Wikipedia: Criticisms"), its reliability compared with other encyclopedias and more specialized sources ("Reliability of Wikipedia"), and even a "Wikipedia: List of hoaxes on Wikipedia".

Now, if you need guidance in a nutshell from a prestigious academic institution on how to use Wikipedia, you may like to try this (cited in "Criticism of Wikipedia"):

The Academic Integrity at MIT handbook for students at the Massachusetts Institute of Technology states: "Wikipedia is Not a Reliable Academic Source: The bibliography published at the end of the Wikipedia entry may point you to potential sources. However, do not assume that these sources are reliable – use the same criteria to judge them as you would any other source. Do not consider the Wikipedia bibliography as a replacement for your own research."

MIT is ranked second out of 500 universities in the current Best Global Universities ranking by U.S. News and World Report. However, you would be wiser after reading The Order of Things: What college rankings really tell us by Malcolm Gladwell in The New Yorker, February 14, 2011 (http://www.newyorker.com/magazine/2011/02/14/the-order-of-things?currentPage=all). That's another story, though.




                                        



Saturday, February 21, 2015

Blind leading the 20/20


My friend's daughter, doing her Ph.D. Down Under, asked me to advise her on software to use for input-output (I-O) analysis. Some time before, I had been lucky enough to look for such software here in Myanmar at the request of my younger friends from academia. With 0% knowledge of I-O analysis and less than 1% knowledge of the R statistical environment, I found out about the rwiot package developed by UNIDO, thanks to Google. It was destined for R's main repository, CRAN (the Comprehensive R Archive Network), but it wasn't available on CRAN at that time, and it still isn't. However, the developers directed me to the download site, and anyone can download the two packages, rwiot and rwiotData, as Windows binaries, from the following links:


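Since the packages are not on CRAN, installing them differs a little from the usual routine. A minimal sketch, assuming the two Windows binaries have already been downloaded by hand (the file names below are made-up placeholders, not the real ones):

```r
# Install the downloaded zip files directly, bypassing CRAN.
# "rwiot_0.1.zip" and "rwiotData_0.1.zip" are hypothetical file names;
# use whatever the developers' download site actually provides.
install.packages(c("rwiot_0.1.zip", "rwiotData_0.1.zip"),
                 repos = NULL, type = "win.binary")

# Then load the main package as usual:
library(rwiot)
```

On non-Windows systems the same `repos = NULL` trick works with source tarballs (`type = "source"`).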
So I immediately replied to her about rwiot. Afterwards I noticed that, in my hurry, I had missed the point that she was thinking about doing the I-O analysis via the GEMPACK software. I looked it up and found that it is for general equilibrium economic analysis.

With my slant towards R, I looked up general equilibrium modeling in R and was led to the gEcon package (http://gecon.r-forge.r-project.org/), whose description reads:

About gEcon

gEcon is a framework for developing and solving large scale dynamic (stochastic) & static general equilibrium models. It consists of model description language and an interface with a set of solvers in R. It was developed at the Department for Strategic Analyses at the Chancellery of the Prime Minister of the Republic of Poland as a part of a project aiming at construction of large scale DSGE & CGE models of the Polish economy.

Publicly available toolboxes used in RBC/DSGE modelling require users to derive the first order conditions (FOCs) and linearisation equations by pen & paper (e.g. Uhlig’s tool-kit) or at least require manual derivation of the FOCs (e.g. Dynare). Derivation of FOCs is also required by GAMS and GEMPACK — probably the two most popular frameworks used in CGE modelling. Owing to the development of an algorithm for automatic derivation of first order conditions and implementation of a comprehensive symbolic library, gEcon allows users to describe their models in terms of optimisation problems of agents. To authors' best knowledge there is no other publicly available framework for writing and solving DSGE & CGE models in this natural way. Writing models in terms of optimisation problems instead of the FOCs is far more natural to an economist, takes off the burden of tedious differentiation, and reduces the risk of making a mistake. gEcon allows users to focus on economic aspects of the model and makes it possible to design large-scale (100+ variables) models. To this end, gEcon provides template mechanism (similar to those found in CGE modelling packages), which allows to declare similar agents (differentiated by parameters only) in a single block. Additionally, gEcon can automatically produce a draft of LaTeX documentation for a model.

The model description language is simple and intuitive. Given optimisation problems, constraints and identities, computer derives the FOCs, steady state equations, and linearisation matrices automatically. Numerical solvers can be then employed to determine the steady state and approximate equilibrium laws of motion around it.


If gEcon has capabilities comparable to GEMPACK's, I felt it would be good to select gEcon, (i) because GEMPACK is not free, and (ii) even if funds for it were available from the university. Notwithstanding my zero knowledge of general equilibrium modeling, I am certainly impressed with gEcon when its introduction says:

To authors' best knowledge there is no other publicly available framework for writing and solving DSGE & CGE models in this natural way. Writing models in terms of optimisation problems instead of the FOCs is far more natural to an economist, takes off the burden of tedious differentiation, and reduces the risk of making a mistake. gEcon allows users to focus on economic aspects of the model and makes it possible to design large-scale (100+ variables) models.

So by choosing gEcon, when she comes back to her own country she could freely share the gEcon software, as well as her knowledge, with local researchers as much as she would like.

The difference between GEMPACK and gEcon, I guess, will be that the former is a stand-alone package (?) while the latter runs on R. That means she would need to learn the basics of R before she could use gEcon. However, that shouldn't be too difficult. I think her university would have some course on R, so she could be running gEcon in no time, or she could try any number of tutorials available on the Web. Anyway, she and her supervisors will be the ones to decide whether they will actually use GEMPACK or any available alternative.

On the other hand, I guess that the establishment and academia are basically suspicious of free and open-source software, for example, and have a general distrust of anything that is free, in the sense that it must somehow be inferior, as seen in such contests as Wikipedia vs. Britannica, crowd vs. experts, collaborative vs. hierarchical, and so on.

One difference is that gEcon is hardly some free-wheeling piece of software. It is a tool developed specifically for use by the Polish government, and so it is worth taking seriously by anyone. Therefore I should be more than happy if this message got through to any interested Myanmar researcher.




Thursday, February 5, 2015

Little data: are we really (data) poor?


I remember an interesting headline of a newspaper article that appeared in the government-owned The Working People's Daily (English) a long time ago in Yangon. Presently fascinated with little data, and beginning to run out of ideas, I tried adding "little data" and "(data)" to that headline to see how it would come out.

Now inspired, I begin to see that an absence of data makes the situation a little bit tricky: it doesn't translate to no unemployment, no disease, no illiteracy, no depletion of resources, no hunger, or no poverty, for example. Nor should you feel defeated and simply assume these problems exist, and at an alarming scale. If you don't have data to guide you, you could be led down a false trail, either an optimistic one or a pessimistic one, and get lost.

And there are many pitfalls with data, like thinking big data = all data. In little data it is a sin to ignore missing data, of which there are two kinds. When you can't get data from some units in the sample, it is called "unit non-response". When values are missing for certain variables you are collecting, that is "item non-response". Statisticians warn that if you couldn't get data from a significant number of households, for example, you can't simply report the results from the households for which you do have data as if the data came from the complete sample. The same goes for missing values such as the age, weight, or height of children.

When you ignore the missing households and simply use the data you have, you are assuming that the data from the missing households wouldn't differ from the data you did get. More often than not, when you can't locate the households or the eligible respondents, or they were not at home, or they outright refused to answer, there are reasons behind that which make them different from the households that did answer the survey questions. Similarly, you can't just assume that the missing values of age, weight, or height are the same as the typical values for the children with no missing values.

UNICEF has cautioned about such problems in their manual for the MICS2 survey. For unit non-response: "Sample surveys like MICS are usually able to obtain response rates of at least 90%. If your survey has response rates lower than 90%, you should be aware that your results may be biased."  For item non-response: "Any variable with 10% or more of the values missing should be used with caution ... If the proportion of missing values is very high you may decide not to use the variable in the analysis at all." (p. 8.2)
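To make those two thresholds concrete, here is a minimal sketch in R on made-up numbers (nothing below comes from an actual survey): a 90% unit-response check and a 10% item-missingness check of the kind the MICS manual describes.

```r
# Unit non-response: compare interviewed households against the sample.
sampled   <- 1000    # households selected (made-up figure)
responded <- 870     # households actually interviewed (made-up figure)
response_rate <- responded / sampled
if (response_rate < 0.90) {
  message("Response rate ", response_rate * 100,
          "%: results may be biased (below the 90% guideline)")
}

# Item non-response: share of missing values per variable,
# on a toy data frame of child measurements.
children <- data.frame(
  age    = c(2, 3, NA, 4, 1, NA, 3, 2, 5, 4),
  weight = c(12, 14, 13, NA, 9, 10, NA, NA, 16, 15),
  height = c(85, 95, 90, 100, NA, 80, 92, 88, NA, 102)
)
missing_share <- colMeans(is.na(children))
print(round(missing_share, 2))   # age 0.2, weight 0.3, height 0.2

# Variables at or above the 10% caution threshold:
print(names(missing_share)[missing_share >= 0.10])
```

Real checks would of course run on the actual questionnaire data, and crossing a threshold is a signal to investigate the non-response, not a licence to drop or keep a variable automatically.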

An example that illustrates a serious case of item missingness comes from the Multiple Indicator Cluster Survey (MICS) for the year 2000 for Myanmar (http://www.childinfo.org/files/myanmartables.pdf).

With missing height or weight data ranging from 38% to 46%, you can see that the information reported in "Table 15: Percentage of under-five children who are severely or moderately undernourished, Myanmar, 2000" would be too risky to use. The report didn't try to address that flaw, and you can see that this second table gives data from only the 8,101 children whose height and weight data were not missing. It is this kind of situation that the big data people point to as one weakness of little data. They note that missingness in surveys is getting worse these days. On the other hand, it is obvious that we need to be data rich (not poor in quantity) and to have good data (not poor in quality) as well.

MICS are surveys that
"run under the program developed by the United Nations Children's Fund to provide internationally comparable, statistically rigorous data on the situation of children and women. The first round of surveys (MICS1) was carried out in over 60 countries in 1995 in response to the World Summit for Children. A second round (MICS2) in 2000 increased the depth of the survey, allowing monitoring of multiple indicators. A third round (MICS3) started in 2006 and aimed at producing data measuring progress toward the Millennium Development Goals (MDGs), A World Fit for Children, and other major relevant international commitments. The fourth round, launched in 2009, aimed at most data collection conducted in 2010, but in reality most MICS4s were implemented in 2011 and even into 2012 and 2013. This represented a scale-up of frequency of MICS from UNICEF, now offering the survey programme on a three-year cycle." (Multiple Indicator Cluster Surveys, Wikipedia)


We noticed that the fifth round of MICS, scheduled for 2012-2014 and not yet mentioned by Wikipedia, had to be completed before the end of 2013 or very early in 2014 (http://www.childinfo.org/mics5.html). The fifth round of MICS surveys will be one of the critically important sources for final MDG reporting, since it will generate data on more than 20 MDG indicators. The United Nations Secretary General’s Final MDG Progress Report will be launched in September 2015.

However, the MICS5 website does not mention Myanmar in its list of confirmed surveys for this round. We don't know whether that means Myanmar is not implementing MICS5. The MICS website lists three reports for Myanmar: (i) round 1, 1995; (ii) round 2 or end-decade, 2000 (tables only); (iii) round 3, 2009-10. Since UNICEF mentioned that MICS5 will produce 20 or more of the MDG indicators, if Myanmar is not conducting MICS5 it seems the government would need alternative plans for producing that data, along with the other remaining MDG indicators. As will be seen later, the IHLCA survey seems to be one of the replacements.

What does little data say about our position with the MDGs?
I could find on the Web only two reports from Myanmar on MDG achievements. One is "Myanmar_MDGReport_2005.pdf", prepared by the Ministry of National Planning and Economic Development (http://www.undp.org/content/dam/undp/library/MDG/english/MDG%20Country%20Reports/Myanmar/Myanmar_MDGReport_2005.pdf).
An interesting point this report mentions regarding the methodology for generating the MDG indicators is that the IHLCA project is meant to close some data gaps, particularly on poverty, and more generally on some other MDG indicators:

Purchasing Power Parity (PPP) ratio is one of the indicators being used to measure poverty by the international organizations. Because of the complexity of computing PPP ratio, Myanmar never had experiences on measuring poverty by using PPP ratio.... the Government has decided to implement Integrated Household Living Conditions Assessment Project (IHLCA) with the assistance of UNDP. This is the first project being undertaken since 2003, in cooperation with the UNDP, with the aim to assess poverty through conducting a very comprehensive survey over the whole country. The IHLCA project has been jointly implemented by the Planning Department and Central Statistical Organization of Ministry of National Planning and Economic Development in collaboration with the IDEA Canadian International Consultant Firm. (p. 15)

The IHLCA has been expected to produce the following 20 MDG indicators (pp. 17-18) out of a total of 48:

Goal 1: Eradicate extreme poverty and hunger (indicators 1, 2, 3, 4)
Goal 2: Achieve universal primary education (indicators 6, 7, 8)
Goal 3: Promote gender equality and empower women (indicators 9, 10, 11)
Goal 4: Reduce child mortality (indicators 13, 14, 15)
Goal 5: Improve maternal health (indicators 16, 17)
Goal 7: Ensure environmental sustainability (indicators 29, 30, 31)
Goal 8: Develop a Global Partnership for Development (indicators 45, 47)

The other is UN Country Team's report, "Thematic Analysis 2011: Achieving the Millennium Development Goals in Myanmar" (http://www.undp.org/content/dam/undp/library/MDG/english/MDG%20Country%20Reports/Myanmar/Thematic-Analysis-2011-for-Myanmar.pdf ).

However, it is not easy to find data for Myanmar on the Web, first because maybe there is not that much of it on the Web. Secondly, it is not easy to find links to the available data, and I am not talking about micro-data but macro-data, meaning the survey reports. For example, searching with Google, I found one IHLCA report, and from it I learned that a "Quality Report" of the IHLCA exists. But I couldn't find a link to it, or to any other IHLCA reports, on the UNDP website where I got the first report. After quite a bit of frustration, I discovered that the MIMU website contains all the links for the IHLCA, plus many from UN agencies and INGOs.

Nevertheless, when I searched for "MDG" there, I couldn't get any results. Searching for "millennium development goals", I got results for reports mixed with a lot of names of people associated with different projects. I don't know whether the results can be sorted by "relevance" or any such criterion. When I downloaded the search result "Ref_Doc_Achieving_the_Millennium_Development_Goals_in_Myanmar_2011.pdf", it happened to be the same report, "Thematic-Analysis-2011-for-Myanmar.pdf", that I had downloaded from the UNDP website!

Talking about the IHLCA, the report "Independent Assessment Mission on the Human Development Initiative Myanmar, Covering the period June 2011 to May 2012" (http://erc.undp.org/evaluationadmin/downloaddocument.html?docid=6156) noted:

"The Integrated Household Living Conditions Assessment (IHLCA), which has examined the extent, nature and causes of poverty in Myanmar has now provided a widely cited baseline for the country to track its progress on MDGs. But the IHLCA has limited utilisation of data and results, as it still remains under applied in project planning. Additionally, access to the database remains restricted to pre-approval by the government led steering group." (p. 5)
"... the IHLCA-2, also assisted in developing the first purchasing power parity (PPP) estimates for the country." (p. 18)

Looking for the PPP estimate for Myanmar, promised since the first round of the IHLCA, we found that one international consultant, together with two Myanmar consultants, had conducted a PPP survey in 2011 (Completion of IHLCA Project Report, UNDP and SIDA, 31 September, p. 7, http://www.undp.org/content/dam/undp/documents/projects/MMR/SIDA%20Rpt%20Completion%20of%20IHLCA%20Project%20Report%20Nov%2021%20EM%2BDK%20%282%29.pdf). However, we couldn't find the related report, not even on the MIMU website.

On the other hand, the report "Analysis of data sources and gaps for monitoring living conditions in the Union of Myanmar" (http://themimu.info/sites/themimu.info/files/documents/Report_Survey_IHLCA_AnalysisDataSource-Gaps_MNPED-UNDP_2010_MMR.pdf), and particularly its Table 1, gives quite a long list of data sources in Myanmar with potential for monitoring not only the MDGs but also other national and international goals for improving living conditions in Myanmar.

So are we not that data poor? Well, I don't know. We still need to watch out for the data revolution and the sustainable development goals of the post-2015 UN development agenda. In the meantime, we give below the snapshot of MDG progress as given on the official UN website for MDG indicators (http://mdgs.un.org/unsd/mdg/Host.aspx?Content=Data/snapshots.htm) for those who are too lazy to look through the IHLCA reports. For the missing poverty incidence in terms of living below $1.25 (PPP) per day, they'll have to consult the relevant IHLCA reports for the alternative estimate after all.