Ondřej Čertík's blog
<br /><br />
<b>How to support both Python 2 and 3</b> (2013-08-04)<br />
<br />
<p>I'll start with the conclusion: making a backwards incompatible version of a language is a terrible idea, and it was a bad mistake. This mistake was somewhat corrected over the years by eventually adding features to both Python 2.7 and 3.3 that actually allow one to run a single code base on both Python versions --- which, as I show below, was discouraged by both Guido and the official Python documents (though the latest <a href="http://docs.python.org/dev/howto/pyporting.html">docs</a> mention it)... Nevertheless, a single code base fixes pretty much all the problems and it actually is fun to use Python again. The rest of this post explains my conclusion in great detail. My hope is that it will be useful to other Python projects as a source of tips and examples of how to support both Python 2 and 3, as well as to future language designers, to keep languages backwards compatible.</p>
<p>When Python 3.x got released, it was pretty much a new language, backwards incompatible with Python 2.x, as it was not possible to run the same source code in both versions. I was extremely unhappy about this situation, because I simply didn't have time to port all my Python code to a new language.</p>
<p>I read the <a href="http://www.python.org/dev/peps/pep-3000/#compatibility-and-transition">official documentation</a> about how the transition should be done, quoting:</p>
<blockquote>
<p>You should have excellent unit tests with close to full coverage.</p>
<ol>
<li>Port your project to Python 2.6.</li>
<li>Turn on the Py3k warnings mode.</li>
<li>Test and edit until no warnings remain.</li>
<li>Use the 2to3 tool to convert this source code to 3.0 syntax. Do not manually edit the output!</li>
<li>Test the converted source code under 3.0.</li>
<li>If problems are found, make corrections to the 2.6 version of the source code and go back to step 3.</li>
<li>When it's time to release, release separate 2.6 and 3.0 tarballs (or whatever archive form you use for releases).</li>
</ol>
</blockquote>
<p>I've also read Guido's blog <a href="http://www.artima.com/weblogs/viewpost.jsp?thread=208549">post</a>, which repeats the above list and adds an encouraging comment:</p>
<blockquote>
<p>Python 3.0 will break backwards compatibility. Totally. We're not even aiming for a specific common subset.</p>
</blockquote>
<p>In other words, one has to maintain a Python 2.x code base, then run the <code>2to3</code> tool to get it converted. If you want to develop using Python 3.x, you can't, because all code must be developed in 2.x. As to the actual porting, Guido says in the above post:</p>
<blockquote>
<p>If the conversion tool and the forward compatibility features in Python 2.6 work out as expected, steps (2) through (6) should not take much more effort than the typical transition from Python 2.x to 2.(x+1).</p>
</blockquote>
<p>So sometime in 2010 or 2011 I started porting <a href="http://sympy.org">SymPy</a>, which is now a pretty large code base (<code>sloccount</code> says over 230,000 lines of code, and in January 2010 it said almost 170,000 lines). I remember spending a few full days on it, and I just gave up, because it wasn't just a matter of changing a few things, but of changing pretty fundamental things inside the code base, and one cannot do it half-way; one has to get all the way through and then polish it up. We ended up using one full Google Summer of Code project for it; you can read the <a href="https://github.com/sympy/sympy/wiki/GSoC-2011-Report-Vladimir-Peric%3A-Porting-to-Python-3">final report</a>. I should mention that we use metaclasses and other things that make such porting harder.
Conclusion: this was definitely not "the typical transition from Python 2.x to 2.(x+1)".</p>
<p>OK, after months of hard work by a lot of people, we finally had a Python 2.x code base that could be translated using the <code>2to3</code> tool, with the tests passing in Python 3.x.</p>
<p>The next problem is that Python 3.x is pretty much like a ghetto --
you can use it as a user, but you can't develop in it. The <code>2to3</code> translation takes over 5 minutes on my laptop, so any interactivity is gone. It is true that the tool can cache results, so the next pass is somewhat faster, but in practice this still turns out to be much, much worse than compiling C or Fortran programs (done for example with <code>cmake</code>), both in terms of time and in terms of robustness.
And I am not even talking about pip <a href="https://github.com/pypa/pip/issues/701">issues</a> or setup.py <a href="https://github.com/sympy/sympy/pull/2262">issues</a> with calling <code>2to3</code>.
What a big mess... Programming should be fun, but this is not fun.</p>
<p>I'll be honest: this situation killed a lot of my enthusiasm for Python as a platform. I learned modern Fortran in the meantime and noticed with admiration that it still compiles old F77 programs without modification; I even managed to compile a 40-year-old pre-F77 code
with just minimal modifications (I had to port the code to F77). Yet modern Fortran is pretty much a completely different language, with all the fancy features that one would want. Together with my colleagues I created the <a href="http://fortran90.org">fortran90.org</a> website, where you can compare Python/NumPy side by side with modern Fortran: it's pretty much a 1:1 translation with similar syntax (for numerical code), except that you need to add types, of course. Yet Fortran is fully backwards compatible. What a pleasure to work with!</p>
<p>Fast forward to last week. A heroic effort by <a href="https://github.com/flacjacket">Sean Vig</a>, who ported SymPy to a single code base (<a href="https://github.com/sympy/sympy/pull/2318">#2318</a>), was merged. Earlier this year, similar pull requests by other people converted the NumPy (<a href="https://github.com/numpy/numpy/pull/3178">#3178</a>,
<a href="https://github.com/numpy/numpy/pull/3191">#3191</a>,
<a href="https://github.com/numpy/numpy/pull/3201">#3201</a>,
<a href="https://github.com/numpy/numpy/pull/3202">#3202</a>,
<a href="https://github.com/numpy/numpy/pull/3203">#3203</a>,
<a href="https://github.com/numpy/numpy/pull/3205">#3205</a>,
<a href="https://github.com/numpy/numpy/pull/3208">#3208</a>,
<a href="https://github.com/numpy/numpy/pull/3216">#3216</a>,
<a href="https://github.com/numpy/numpy/pull/3223">#3223</a>,
<a href="https://github.com/numpy/numpy/pull/3226">#3226</a>,
<a href="https://github.com/numpy/numpy/pull/3227">#3227</a>,
<a href="https://github.com/numpy/numpy/pull/3231">#3231</a>,
<a href="https://github.com/numpy/numpy/pull/3232">#3232</a>,
<a href="https://github.com/numpy/numpy/pull/3235">#3235</a>,
<a href="https://github.com/numpy/numpy/pull/3236">#3236</a>,
<a href="https://github.com/numpy/numpy/pull/3237">#3237</a>,
<a href="https://github.com/numpy/numpy/pull/3238">#3238</a>,
<a href="https://github.com/numpy/numpy/pull/3241">#3241</a>,
<a href="https://github.com/numpy/numpy/pull/3242">#3242</a>,
<a href="https://github.com/numpy/numpy/pull/3244">#3244</a>,
<a href="https://github.com/numpy/numpy/pull/3245">#3245</a>,
<a href="https://github.com/numpy/numpy/pull/3248">#3248</a>,
<a href="https://github.com/numpy/numpy/pull/3249">#3249</a>,
<a href="https://github.com/numpy/numpy/pull/3257">#3257</a>,
<a href="https://github.com/numpy/numpy/pull/3266">#3266</a>,
<a href="https://github.com/numpy/numpy/pull/3281">#3281</a>,
...) and SciPy (<a href="https://github.com/scipy/scipy/pull/397">#397</a>) code bases as well.
Now all these projects have just one code base and it works in all Python versions (2.x and 3.x) without the need to call the <code>2to3</code> tool.</p>
<p>With a single code base, programming in Python is fun again. You can choose any Python version, be it 2.x or 3.x, and simply submit a patch. The patch is then tested using <a href="https://travis-ci.org/">Travis-CI</a> to make sure it works in all Python versions. Installation has been simplified (no need to call any <code>2to3</code> tools and no more hacks to get <code>setup.py</code> working).</p>
<p>In other words, this is how it should be: you write your code once, and you can use any supported language version to run it, compile it, or develop in it. But for some reason, this obvious solution was discouraged by Guido and other Python documents, as seen above. I just looked up the latest official Python <a href="http://docs.python.org/dev/howto/pyporting.html">docs</a>, and they are no longer outright negative about a single code base. But they still do not recommend this approach as <em>the</em> one. So let me fix that: I do recommend a single code base as <em>the</em> solution.</p>
<p>The newest Python documentation from the last paragraph also mentions:</p>
<blockquote>
<p>Regardless of which approach you choose, porting is not as hard or time-consuming as you might initially think.</p>
</blockquote>
<p>Well, I encourage you to browse through the pull requests that I linked to above for SymPy, NumPy and SciPy. I think it is very time consuming, and that's just converting from <code>2to3</code> to a single code base, which is the easy part. The hard part was actually getting SymPy to work with Python 3 (as I discussed above, that took a couple of months of hard work), and I am pretty sure porting NumPy and SciPy was hard as well.</p>
<p>The docs also say:</p>
<blockquote>
<p>It [a single code base] does lead to code that is not entirely idiomatic Python</p>
</blockquote>
<p>That is true, but our experience has been that with every Python version we drop, we also delete lots of ugly hacks from our code base. This was true when dropping support for 2.3, 2.4 and 2.5, and I expect it will also be true for dropping 2.6 and especially 2.7, when we can simply use the Python 3.x syntax. So not a big deal overall.</p>
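<p>To give a concrete flavor of what such single-code-base support looks like, here is a minimal sketch of two common idioms (my own illustration, not SymPy's actual compatibility module): aliasing built-ins that were renamed between versions, and a <code>with_metaclass</code> helper so that one class definition works under both the Python 2 and Python 3 metaclass syntax:</p>

```python
import sys

PY3 = sys.version_info[0] >= 3

# Built-ins renamed between Python 2 and 3: alias them once,
# then use the aliases everywhere in the code base.
if PY3:
    string_types = (str,)
else:
    string_types = (basestring,)  # noqa -- only exists on Python 2


def with_metaclass(meta, *bases):
    """Create a base class with metaclass ``meta``, valid in 2.x and 3.x.

    ``class Foo(object): __metaclass__ = Meta`` (2.x only) and
    ``class Foo(metaclass=Meta)`` (3.x only) are replaced by
    ``class Foo(with_metaclass(Meta)):``, which parses everywhere.
    """
    # The returned class is itself an instance of ``meta``, so any
    # class deriving from it picks up ``meta`` as its metaclass.
    return meta("NewBase", bases or (object,), {})
```

<p>Any class deriving from the returned base automatically gets the desired metaclass, without either version's special syntax appearing in the source.</p>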
<p>To sum this blog post up: as far as I am concerned, pretty much all the problems with supporting Python 2.x and 3.x are fixed by having a single code base. You can read the pull requests above to see how to implement things (like metaclasses and other fancy stuff...). Python is still pretty much the same language: you write your code, you use a Python version of your choice, and things will just work. Not a big deal overall. The official documentation should be fixed to recommend this approach, and deprecate the other approaches.</p>
<p>I think that Python is great and I hope it will be used more in the future.</p>
<blockquote>
<p>Written with <a href="http://benweet.github.io/stackedit/">StackEdit</a>.</p>
</blockquote>
<br /><br />
<b>My impressions from the SciPy 2013 conference</b> (2013-07-01)<br />
<br />
I have attended the <a href="https://conference.scipy.org/scipy2013/">SciPy 2013 conference</a> in Austin, Texas. Here are my impressions.<br />
<br />
Number one is the fact that the <a href="http://ipython.org/notebook.html">IPython notebook</a> was used by pretty much everyone. I use it a lot myself, but I didn't realize how ubiquitous it has become. It is quickly becoming the standard now. The IPython notebook uses Markdown, and in fact it is better than ReST. The way to remember the <code>[]()</code> syntax for links is that in regular text you put links into () parentheses, so you do the same in Markdown, and prepend [] with the text of the link. The other way to remember it is that [] feels more serious and thus is used for the text of the link. I stressed several times to <a class="g-profile" href="http://plus.google.com/105051551851350439748" target="_blank">+Fernando Perez</a> and <a class="g-profile" href="http://plus.google.com/110706953761515533762" target="_blank">+Brian Granger</a> how awesome it would be to have interactive widgets in the notebook. Fortunately that was pretty much preaching to the choir, as that's one of the first things they plan to implement good foundations for, and I just can't wait to use it.<br />
<br />
It is now clear that the IPython notebook is <i>the</i> way to store computations that I want to share with other people, or to use as a "lab notebook" for myself, so that I can remember exactly what I did to obtain my results (for example, how exactly I obtained some figures from raw data). In other words --- instead of having sets of scripts and manual bash commands that have to be executed in a particular order to do what I want, I just use an IPython notebook and put everything in there.<br />
<div>
<br /></div>
Number two is how big the conference has become since I last attended (a couple of years ago), yet it still has a friendly feeling. Unfortunately, I had to miss a lot of talks due to scheduling conflicts (there were three parallel sessions), so I look forward to seeing them on video.<br />
<br />
<a class="g-profile" href="http://plus.google.com/111657756858197263626" target="_blank">+Aaron Meurer</a> and I gave the <a href="https://conference.scipy.org/scipy2013/tutorial_detail.php?id=101">SymPy tutorial</a> (see the link for videos and other tutorial materials). It was nice to finally meet <a class="g-profile" href="http://plus.google.com/109882876523836932473" target="_blank">+Matthew Rocklin</a> (a very active SymPy contributor) in person. He also gave an interesting presentation about symbolic matrices + LAPACK code generation. <a class="g-profile" href="http://plus.google.com/110966557175293116547" target="_blank">+Jason Moore</a> presented PyDy.<br />
It's been a great pleasure for us to invite <a class="g-profile" href="http://plus.google.com/112898427768461421869" target="_blank">+David Li</a> (still a high school student) to attend the conference and give a presentation about his work on <a href="http://sympygamma.com/">sympygamma.com</a> and <a href="http://live.sympy.org/">live.sympy.org</a>. <br />
<br />
It was nice to meet the Julia guys, <a class="g-profile" href="http://plus.google.com/104556984762576706263" target="_blank">+Jeff Bezanson</a> and <a class="g-profile" href="http://plus.google.com/106011176106265292073" target="_blank">+Stefan Karpinski</a>. I contributed the Fortran benchmarks on Julia's website some time ago, but I had the feeling that a lot of them are quite artificial and not very meaningful. I think Jeff and Stefan confirmed my feeling. Julia seems to have a quite interesting type system and multiple dispatch, which SymPy should learn from.<br />
<br />
I met the VTK guys <a class="g-profile" href="http://plus.google.com/117876587154330177046" target="_blank">+Matthew McCormick</a> and <a class="g-profile" href="http://plus.google.com/101821293118663904053" target="_blank">+Pat Marion</a>. One of the keynotes was given by <a class="g-profile" href="http://plus.google.com/105850319012996814544" target="_blank">+Will Schroeder</a> from Kitware about publishing. I remember him stressing the importance of managing dependencies well, as well as using a BSD-like license (as opposed to viral licenses like the GPL or LGPL), and noting that open source has pretty much won (i.e. it is now clear that that is the way to go).<br />
<br />
I had great discussions with <a class="g-profile" href="http://plus.google.com/110321315348047026192" target="_blank">+Francesc Alted</a>, <a class="g-profile" href="http://plus.google.com/106486100774697058597" target="_blank">+Andy Terrel</a>, <a class="g-profile" href="http://plus.google.com/101681894269884323551" target="_blank">+Brett Murphy</a>, <a class="g-profile" href="http://plus.google.com/106311201257953729437" target="_blank">+Jonathan Rocher</a>, <a class="g-profile" href="http://plus.google.com/104443012252347946160" target="_blank">+Eric Jones</a>, <a class="g-profile" href="http://plus.google.com/110081663265512493333" target="_blank">+Travis Oliphant</a>, <a class="g-profile" href="http://plus.google.com/113023052893273684980" target="_blank">+Mark Wiebe</a>, <a class="g-profile" href="http://plus.google.com/111480164393314519931" target="_blank">+Ilan Schnell</a>, <a class="g-profile" href="http://plus.google.com/104831275312843762750" target="_blank">+Stéfan van der Walt</a>, <a class="g-profile" href="http://plus.google.com/108585099026155174191" target="_blank">+David Cournapeau</a>, <a class="g-profile" href="http://plus.google.com/116439624339414215461" target="_blank">+Anthony Scopatz</a>, <a class="g-profile" href="http://plus.google.com/105656345164808608690" target="_blank">+Paul Ivanov</a>, <a class="g-profile" href="http://plus.google.com/105753151742916635749" target="_blank">+Michael Droettboom</a>, <a class="g-profile" href="http://plus.google.com/102862681530523589739" target="_blank">+Wes McKinney</a>, <a class="g-profile" href="http://plus.google.com/101985752635828670051" target="_blank">+Jake Vanderplas</a>, <a class="g-profile" href="http://plus.google.com/115993532093165217430" target="_blank">+Kurt Smith</a>, <a class="g-profile" href="http://plus.google.com/105827017409315158569" target="_blank">+Aron Ahmadia</a>, <a class="g-profile" href="http://plus.google.com/110786793649177800375" target="_blank">+Kyle Mandli</a>, <a 
class="g-profile" href="http://plus.google.com/103293634823298090368" target="_blank">+Benjamin Root</a> and others. <br />
<br />
<br />
It's also been nice to have a chat with <a class="g-profile" href="http://plus.google.com/105603936345485877784" target="_blank">+Jason Vertrees</a> and other guys from <a href="https://www.schrodinger.com/">Schrödinger</a>.<br />
<br />
One other thing I realized last week at the conference is that pretty much everyone agreed that NumPy should act as the default way to represent memory (no matter whether the array was created in Fortran or other code) and allow manipulations on it. Faster libraries like <a href="http://blaze.pydata.org/">Blaze</a> or <a href="http://blog.enthought.com/general/enthought-awarded-1m-doe-sbir-grant-to-develop-open-source-python-hpc-framework/">ODIN</a> should then hook into NumPy using <a href="http://en.wikipedia.org/wiki/Multiple_dispatch">multiple dispatch</a>. SymPy would then also hook in, so that it can be used with array operations natively. Currently SymPy does work with NumPy (see our <a href="https://github.com/sympy/sympy/blob/master/sympy/external/tests/test_numpy.py">tests</a> for some examples of what works), but the solution is a bit fragile (it is not possible to override NumPy's behavior, but because NumPy supports general objects, we simply give it SymPy objects and things mostly work).<br />
<br />
Similarly, I would like to create multiple dispatch in the SymPy core itself, so that other (faster) libraries for symbolic manipulation can hook themselves in, and their own (faster) multiplication, expansion or series expansion would get called instead of SymPy's default pure Python implementation.<br />
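To illustrate the idea, here is a toy sketch of type-based multiple dispatch (my own illustration, not SymPy's design or any actual SymPy API): a registry maps tuples of argument types to implementations, so a faster library could register its own routine for its own types.<br />

```python
class Multimethod(object):
    """Toy multiple dispatch: choose the implementation by the types
    of *all* arguments, not just the first one (as ordinary methods do)."""

    def __init__(self, name):
        self.name = name
        self.impls = {}

    def register(self, *types):
        def decorator(func):
            self.impls[types] = func
            return func
        return decorator

    def __call__(self, *args):
        key = tuple(type(a) for a in args)
        if key not in self.impls:
            raise TypeError("no %s implementation for %r" % (self.name, key))
        return self.impls[key](*args)


# The default (slow, pure Python) implementation...
mul = Multimethod("mul")

@mul.register(int, int)
def _mul_ints(a, b):
    return a * b

# ...and a hypothetical faster library registering its own type:
class FastExpr(object):
    def __init__(self, v):
        self.v = v

@mul.register(FastExpr, FastExpr)
def _mul_fast(a, b):
    return FastExpr(a.v * b.v)  # a real library would call optimized code here
```

The point of dispatching on all argument types is that the faster library's routine wins whenever its own objects are involved, without the core having to know about it in advance.<br />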
<br />
Other blog posts from the conference:<br />
<ul>
<li>Aaron's <a href="http://asmeurersympy.wordpress.com/2013/07/02/scipy-2013/">post</a></li>
<li>Fernando's <a href="http://blog.fperez.org/2013/07/in-memoriam-john-d-hunter-iii-1968-2012.html">post</a></li>
</ul>
<br /><br />
<b>How to make pdflatex accept .eps images</b> (2012-09-02)<br />
<br />
Unfortunately, <code>pdflatex</code> does not support <code>.eps</code> images; for example, the
following code fails:<br />
<pre><code>\usepackage{graphicx}
...
\includegraphics{qc.eps}
</code></pre>
The fix is to put the following lines
at the top of the document:<br />
<pre><code>\usepackage{epstopdf}
\epstopdfsetup{suffix=}
\DeclareGraphicsRule{.eps}{pdf}{.pdf}{`epstopdf #1}
</code></pre>
and compile with the <code>-shell-escape</code> flag:<br />
<pre><code>pdflatex -shell-escape my_file.tex
</code></pre>
Then the above code works. You need the <code>epstopdf</code> program, which is run behind
the scenes.<br />
<br />
<b>Update</b>:
Apparently, it's enough to just include the following line at the top of the document:
<br />
<pre><code>\usepackage{epstopdf}
</code></pre>
One still has to compile with the <code>-shell-escape</code> flag. This works for me in Ubuntu 12.04.
<br />
<br />
Thanks to: <a href="http://chi3x10.wordpress.com/2009/06/18/eps-and-pdflatex-no-more-converting-eps-to-pdf/">http://chi3x10.wordpress.com/2009/06/18/eps-and-pdflatex-no-more-converting-eps-to-pdf/</a>
<br /><br />
<b>How to convert scanned images to pdf</b> (2012-06-04)<br />
<br />
From time to time I need to convert scanned documents to the pdf format.<br />
<i><br /></i><br />
<i>Usage scenario 1</i>: I scan part of a book (i.e. some article) on a school's scanner that sends me 10 big separate color pdf files (one pdf per page). I want to get one nice, small (black and white) pdf file with all the pages.<br />
<i><br /></i><br />
<i>Usage scenario 2</i>: I download a web form, print it, fill it in, sign it, scan it on my own scanner using Gimp and now I want to convert the image into a nice pdf file (either color or black & white) to send back over email.<br />
<br />
<i>Solution</i>: I save the original files (be it pdf or png) into a folder and use git to track them. Then I create a simple reduce script to convert them to the final format (view it as a pipeline). Often I need to tweak one or two parameters in the pipeline.<br />
<br />
Here is a script for scenario 1:<br />
<script src="https://gist.github.com/2871625.js?file=reduce2.sh">
</script>
<br />
And here for scenario 2:<br />
<script src="https://gist.github.com/2871625.js?file=reduce1.sh">
</script>
<br />
There can be several unexpected surprises along the way. From my experience:<br />
<br />
<ul>
<li>If I convert png directly to tiff, sometimes the resolution comes out wrong. The solution is to always convert to ppm (color) or pbm (black and white) first, which is just a simple file format containing the raw pixels. This is the "starting" format (so first I convert the initial pdf or png into ppm/pbm) and then do everything else from there. That proved to be very robust.</li>
<li>The <span style="font-family: 'Courier New', Courier, monospace;">tiff2pdf</span> utility proved to be the most robust way to convert an image to a pdf. All other ways that I tried failed in one way or another (resolution, positioning, paper format or other things were wrong...). It can create multi-page pdf files, set the paper format (US Letter, A4, ...) and so on.</li>
<li>The Linux <span style="font-family: 'Courier New', Courier, monospace;">convert</span> utility (from ImageMagick) is a robust tool for cropping images, converting color to black and white (using a threshold, for example) and other things, as long as the image is first converted to ppm/pbm. In principle it can also produce pdf files, but that didn't work well for me.</li>
<li>I sometimes use the <a href="http://unpaper.berlios.de/">unpaper</a> program in the pipeline for some automatic polishing of the images.</li>
</ul>
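As an illustration of the pipeline described above, here is how the scenario 1 commands could be generated in Python. This is a sketch only: the tools (<code>pdftoppm</code>, <code>convert</code>, <code>tiffcp</code>, <code>tiff2pdf</code>) are real, but the exact flags, file names and the 300 dpi / 60% threshold values are my guesses at a typical setup, not the contents of the gist scripts.<br />

```python
def scenario1_commands(page_pdfs, out="book.pdf", threshold="60%"):
    """Build the shell commands for scenario 1: per-page color pdfs
    from the scanner -> a single small black-and-white pdf.

    Returns a list of argv lists; run each one with subprocess.check_call.
    """
    cmds = []
    tiffs = []
    for i, pdf in enumerate(page_pdfs):
        base = "page%02d" % i
        # pdf -> ppm (raw pixels): the robust "starting" format
        cmds.append(["pdftoppm", "-r", "300", pdf, base])
        # color ppm -> black-and-white pbm using a threshold
        cmds.append(["convert", base + "-1.ppm", "-threshold", threshold,
                     base + ".pbm"])
        # pbm -> tiff, which tiff2pdf handles best
        cmds.append(["convert", base + ".pbm", base + ".tif"])
        tiffs.append(base + ".tif")
    # join the per-page tiffs and convert the result to a single pdf
    cmds.append(["tiffcp"] + tiffs + ["all.tif"])
    cmds.append(["tiff2pdf", "-p", "letter", "-o", out, "all.tif"])
    return cmds
```

Keeping the pipeline as data like this makes it easy to tweak one parameter (the threshold, the resolution, the paper size) and rerun everything.<br />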
<div>
In general, I am happy with my solution. So far I have always been able to get what I needed using this "pipeline" method.</div>
<br /><br />
<b>Discussion about global warming</b> (2012-01-29)<br />
<br />
I have read the text <a href="http://www.thegwpf.org/images/stories/gwpf-reports/happer-the_truth_about_greenhouse_gases.pdf">The Truth About Greenhouse Gases</a> by <a href="http://www.princeton.edu/physics/people/display_person.xml?netid=happer">William Happer</a>, a physicist at Princeton. I liked it, so I <a href="https://plus.google.com/u/0/104039945248245758823/posts/AZeLqgLkh5J">posted</a> it to my Google+.
I was surprised by so many emotional responses. The post also prompted Michael Tobis to write a <a href="http://init.planet3.org/2011/08/truth-about-truth-about-greenhouse-gases.html">blog post</a> with his opinion.
<br/><br/>
I was not satisfied with the overall tone of the discussion. I am really just interested in factual arguments. As such,
I browsed through all the arguments against William's paper from the above discussion, and chose one well-formulated question that I think represents the most important objection: "In your paper you state in several places that doubling the CO2 concentration will only increase the temperature by 1 °C. However, it is claimed (see for example Michael's post above)
that the increase will be around 2.5 °C. Which number is correct and where does it come from?" I wrote to William and asked
whether he would be willing to answer it. With his permission I am posting his answer here.
<br/><br/>
Answer: <a href="https://gist.github.com/raw/1702909/beb2244208bab8edb83481575647789975fd6af6/sensitivity.pdf">sensitivity.pdf</a>
<br/>
Images referenced in the answer: <a href="https://gist.github.com/raw/1702909/cb44d1731be2a66626976a8f1468556c9a66f427/image001.png">image001.png</a>, <a href="https://gist.github.com/raw/1702909/b81174acf0394d50651157b55a05bc4e4ae7a730/image002.png">image002.png</a>.
<br/>
Link referenced in the answer: <a href="http://www.thegwpf.org/best-of-blogs/4247-steve-mcintyre-closing-thoughts-on-best.html">http://www.thegwpf.org/best-of-blogs/4247-steve-mcintyre-closing-thoughts-on-best.html</a>
<br/><br/>
Edit: I was told that Blogger makes it really hard to comment under the article. You can discuss it at my G+ post about this:
<a href="https://plus.google.com/u/0/104039945248245758823/posts/PJeqx7GKtLg">https://plus.google.com/u/0/104039945248245758823/posts/PJeqx7GKtLg</a>
<br /><br />
<b>When double precision is not enough</b> (2012-01-26)<br />
<br />
I was doing a finite element (FE) calculation and I needed the sum of the lowest 7 eigenvalues of a symmetric matrix (which comes from the FE assembly) to converge to at least 1e-8 accuracy (so that I could check a calculation done by another solver of mine, which calculates the same thing but doesn't use FE). In reality I wanted the value rounded to 8 decimal digits to be correct, so I really needed 1e-9 accuracy (it's ok if it is, let's say, 2e-9, but not ok if it is 9e-9). With my FE solver, I couldn't get it to converge better than roughly 5e-7, no matter how hard I tried. Now what?<br />
<br />
When doing the convergence study, I take a good mesh and keep increasing "p" (the polynomial order) until it converges. For my particular problem, it is fully converged at about p=25 (the solver supports orders up to 64). Increasing "p" further does not increase the accuracy anymore, and the accuracy stays at the level of 5e-7 for the sum of the lowest 7 eigenvalues. For optimal meshes it converges at p=25; for non-optimal meshes it converges at higher "p", but in all cases it doesn't get below 5e-7.<br /><br />
I know from experience that for simpler problems the FE solver can easily converge to 1e-10 or better using double precision. So I know it is doable; the question is what the problem is. There
are a few possible reasons:<br />
<br />
<ul>
<li>The FE quadrature is not accurate enough</li>
<li>The condition number of the matrix is high, thus LAPACK doesn't return very accurate eigenvalues</li>
<li>Bug in the assembly/solver (like single/double corruption in Fortran, or some other subtle bug)</li>
</ul>
When using the same solver for a simpler potential, it converged nicely to 1e-10. This suggests there is no bug in the assembly or solver itself. It is possible that the quadrature is not accurate enough, but again, if it converges for the simple problem, that's probably not it. So it seems it is the ill-conditioned matrix that causes this. So I printed the residuals (which I simply calculated in Fortran using the matrix and the eigenvectors returned by LAPACK), and they only reached 1e-9. For simpler problems, they go to 1e-14 easily. So that must be it. How do we fix it?<br />
<br />
Obviously by making the matrix less ill-conditioned. The ill-conditioning is caused by the mesh for the problem (the ratio of the longest to shortest elements is 1e9), but for my problem I really needed such a mesh. So the other option is to increase the accuracy of the real numbers.<br />
<br />
In Fortran, all real variables are defined as real(dp), where dp is an integer defined in a single place in the project. There are several ways to define it, but its value is 8 for gfortran, meaning double precision.
So I increased it to 16 (quadruple precision) and recompiled. Now the whole program calculates in quadruple precision (more than 30 significant digits). I had to recompile LAPACK using the "-fdefault-real-8" gfortran option, which promotes all double precision numbers to quadruple precision, and I used the "d" versions (double precision, now promoted to quadruple) of the LAPACK routines.
<br /><br />
I reran the calculation --- and suddenly the LAPACK residuals are around 1e-13, and the solver converges to 1e-10 easily (for the sum of the lowest 7 eigenvalues). Problem solved.
<br /><br />
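The underlying issue is easy to demonstrate in a few lines (a Python toy example of my own, not the Fortran solver itself): when terms of vastly different magnitude are combined, double precision with its roughly 16 significant digits loses the small contribution entirely, while higher-precision arithmetic (here Python's decimal module standing in for quadruple precision) keeps it.
<br />

```python
from decimal import Decimal, getcontext

# With an element-size ratio of 1e9 (as in the mesh above), contributions
# to a single quantity can differ by ~9 orders of magnitude.  In double
# precision, the spacing between adjacent floats near 1e9 is about 1.2e-7,
# so a 1e-9 contribution vanishes completely:
double_result = (1.0e9 + 1.0e-9) - 1.0e9
assert double_result == 0.0

# ~34 significant digits, roughly what IEEE quadruple precision offers:
getcontext().prec = 34
quad_result = (Decimal("1e9") + Decimal("1e-9")) - Decimal("1e9")
assert quad_result == Decimal("1e-9")  # the small term survives
```

Of course, software-emulated high precision is much slower than hardware doubles, which mirrors the roughly 10x slowdown of quadruple precision mentioned in this post.
<br /><br />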
Turning my Fortran program to quadruple precision is as easy as changing one variable and recompiling. Turning LAPACK to quadruple precision is easy with a single gfortran flag (LAPACK uses the old F77 syntax for double precision; if it used real(dp), I would simply change that as in my program). The whole calculation got at least 10x slower with quadruple precision. The reason is that the gfortran runtime uses the libquadmath library, which simulates quadruple precision (current CPUs only support double precision natively).
<br /><br />
I actually discovered a few bugs in my program (typically, some constants in older code didn't use the "dp" syntax but had double precision hardwired). Fortran warns about all such cases, where real variables have incompatible precision.
<br /><br />
It is amazing how easy it is to work with different precisions in Fortran (literally just one change and a recompile). How could this be done in C++? This Wikipedia <a href="http://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format">page</a> suggests that "long double" is only 80-bit in most cases (quadruple is 128-bit), but gcc offers __float128, so it seems I would have to manually change all "double" to "__float128" in the whole C++ program (this could be done with a single "sed" command).
<br /><br />
<b>Google Code vs GitHub for hosting opensource projects</b> (2010-11-18)<br />
<br />
<a href="http://cython.org/">Cython</a> is now considering options for where to move its main (mercurial) repository, and <a href="http://www.math.washington.edu/~robertwb/">Robert Bradshaw</a> (one of the main Cython developers) has asked me about my experience with <a href="http://code.google.com/hosting/">Google Code</a> and <a href="http://github.com/">GitHub</a>, since we use both with <a href="http://sympy.org/">SymPy</a>.<br /><br />Google Code is older, and it was the first service to provide a free, virtually unlimited number of projects that you could easily and immediately set up. At that time (4 years ago?) that was something unheard of.
However, the GitHub guys have in the meantime not only made this available too, but also implemented features that (as far as I know) no one else offers at all: in particular, hosting your own pages at your own domain (but on GitHub's servers; some examples are <a href="http://sympy.org">sympy.org</a> and <a href="http://docs.sympy.org/">docs.sympy.org</a>), commenting on git branches and pull requests <span style="font-style:italic;">before</span> the code gets merged in (I am 100% convinced that this is the right approach, as opposed to commenting on the code <span style="font-style:italic;">after</span> it gets in), easy forking of repositories, and simply more social features that Google Code doesn't have.<br /><br />I believe that managing an opensource project is mainly a social activity, and GitHub's social features really do make many things easier. From this point of view, GitHub is clearly the best choice today.<br /><br />I think there is only one (but potentially big) problem with GitHub: its issue tracker is very bad compared to the Google Code one. For that reason (and also because we already use it), we keep the SymPy issues at Google Code.<br /><br />The above are the main things to consider. 
Now there are some smaller things to keep in mind, which I will briefly touch on below. Google Code doesn't support git, and it blocks access from Cuba and other countries. With Google Code, to change the front page you need to be an admin, while at GitHub I simply give push access to all sympy developers, so anyone just pushes a patch to this repository: <a href="https://github.com/sympy/sympy.github.com">https://github.com/sympy/sympy.github.com</a>, and it automatically appears on our front page (<a href="http://sympy.org/">sympy.org</a>). With Google Code we had to write long pages (in our docs) about how to send patches; with GitHub we just say "send us a pull request" and point to: <a href="http://help.github.com/pull-requests/">http://help.github.com/pull-requests/</a>. In other words, GitHub takes care of teaching people how to use git and how to send patches, and we can concentrate on reviewing the patches and pushing them in. <br /><br />Wiki pages at github are maintained in git, and they provide the web frontend to them as <a href="https://github.com/github/gollum">opensource</a>, so there is no vendor lock-in. 
Anyone with a github account can modify our wiki pages, while the Google Code pages can only be modified by people that I add to the Google Code project. That forced us to install mediawiki on my linode server (hosted at <a href="http://linode.com/">linode.com</a>, which by the way is an excellent VPS hosting service that I have been using for a couple of years already and can fully recommend), and I had to manage it all the time; now we are moving our pages to the github wiki, so that I have one less thing to worry about.<br /><br />So as you can see, I, as admin, have fewer things to worry about, as github manages everything for me now, while with Google Code I had to manage lots of things on my linodes.<br /><br />One other thing to consider is that GitHub is primarily for git, but they also provide svn and hg access (both push and pull; they translate the repository automatically between git and svn/hg). I never really used it much, so I don't know how stable it is. As I wrote <a href="http://ondrejcertik.blogspot.com/2010/10/git-has-won.html">before</a>, I think that git is now the best tool for maintaining a project, and that github is now the best place to host it (except for the issue tracker, where Google Code is better).Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com16tag:blogger.com,1999:blog-6568744196982634289.post-89336682591958594622010-10-31T00:26:00.000-07:002010-10-31T00:40:59.989-07:00git has wonI switched to git from mercurial about two years ago. See here why I <a href="http://ondrejcertik.blogspot.com/2008/08/i-am-switching-from-mercurial-to-git.html">switched</a> and here my experience <a href="http://ondrejcertik.blogspot.com/2008/12/experience-with-git-after-4-months.html">after 4 months</a>. Back then I was unsure whether git would win, but I thought it had bigger momentum. Well, I think it's now quite clear that git has already won. 
Pretty much everybody that I collaborate with is using git now.<br /><br />I use github every day, and now thanks to github <a href="http://help.github.com/pull-requests/">pull requests</a>, I think it's the best collaboration platform out there (compared to Google Code, Sourceforge, Bitbucket or Launchpad).<br /><br />I think it's partly because the github guys have a clear vision of what has to be done to make collaboration easier and they do it, but more importantly because git branches are the way to go, as are other git features that are "right" from the beginning (branches, interactive rebase, and so on), while other VCS like bzr and mercurial either don't have them or are only getting them now, and it's hard to get used to the alternatives (for example mercurial uses "mercurial queues", and I think that is totally the wrong approach).<br /><br />Anyway, this is just my own personal opinion. I'll be happy to discuss it in the comments if you disagree.Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com22tag:blogger.com,1999:blog-6568744196982634289.post-73252503346733041852010-08-13T20:46:00.000-07:002010-08-13T21:05:47.407-07:00Week Aug 9 - 13On Monday I learned <a href="http://fwrap.sourceforge.net/">fwrap</a> (an excellent piece of software, btw). There were a few minor technical issues that I communicated to Kurt on the fwrap mailinglist, and I also sent him a simple patch so that it works fine for my fortran code (functions returning the value itself, instead of a tuple of length 1).<br /><br />Then I took my old fortran based shooting method solvers that I wrote a couple of years ago, wrapped them using fwrap and ran a couple of simulations against my FE solver.<br /><br />On Tuesday we had a lunch with all the advisors and students and the LLNL director, and a little presentation about what we did.<br /><br />On Wednesday I ran shooting method calculations for 50 states of silver, both for the selfconsistent DFT potential
and the Z/r potential. I then also ran the FE solver for the same DFT potential and compared results. There were lots of small technical issues; for example I had to use cubic splines to interpolate the potential, play with the mesh for the shooting method, and so on.<br /><br />However, the shooting method and FE agree to every single printed digit, after making sure that the mesh is ok for both methods. For all potentials that I tried. That's very cool.<br /><br />In the process, I also wrote a patch for SymPy to calculate exact energies for the Hydrogen atom, from both the Schroedinger and Dirac equations. I still need to polish it a bit.<br /><br />On Thursday I ran a couple more calculations, set up a poster and had a poster session. It was two hours, and I think around 7 people (not counting other students and people from our group) stopped by and talked with me about it, so I was very happy. Being able to solve the radial Schroedinger and especially Dirac equations robustly is something that several people in the lab really need.<br /><br />Today I talked a little bit (finally) about Green functions in QM and QFT with a postdoc in the Quantum Simulations group, which I had always wanted to do but didn't have time for before; then I packed my things and went back to Reno.<br /><br />My plan for the next week(s) is to wrap up what I did and put it into articles. I already have enough material for some articles, so it has to be done. In parallel, I'd like to finish the FE Dirac solver; the coding is done, but now I need to play with adaptivity and also investigate whether we are getting the spurious states that other people get when using b-splines.Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com0tag:blogger.com,1999:blog-6568744196982634289.post-28965050297221575662010-08-06T18:02:00.000-07:002010-08-06T18:27:19.024-07:00Week Aug 2 - 6This week I essentially only worked on my LLNL poster, which I finally finished about two hours ago. 
I have created a web page for it:<br /><br /><a href="http://certik.github.com/ccms-10-poster/">http://certik.github.com/ccms-10-poster/</a><br /><br />where you can download the pdf and sources; I also put there some relevant info and links.<br /><br /><br />It turned out to be a lot more work than I expected (well, as usual), but John and I were very thorough, and in the process I discovered several bugs in my program, so I am glad we did it. I used to generate all the plots by hand, by manually adjusting all the parameters in the Python script (like atomic number, mesh parameters, element orders, adaptivity parameters, error tolerance and so on). Essentially I had to remember all these parameters for each of the plots (about 10 of them). Then I settled on a Python dictionary that holds all the parameters, which I just pass to a radial_schroedinger_equation_adapt(params, error_tol=1e-8) function.<br /><br />Here are examples of the parameters:<br /><pre><br /> params_hydrogen_p_L = dict(l=0, Z=1, a=0, b=100, el_num=4, el_order=1,<br /> eig_num=3, mesh_uniform=False, mesh_par1=20, adapt_type="p",<br /> eqn_type="R")<br /> params_hydrogen_p_U = dict(l=0, Z=1, a=0, b=100, el_num=4, el_order=2,<br /> eig_num=3, mesh_uniform=True, adapt_type="p", eqn_type="R")<br /> params_hydrogen_hp_U = dict(l=0, Z=1, a=0, b=100, el_num=4, el_order=2,<br /> eig_num=3, mesh_uniform=True, adapt_type="hp", eqn_type="R")<br /> params_hydrogen_h_U = dict(l=0, Z=1, a=0, b=100, el_num=4, el_order=6,<br /> eig_num=3, mesh_uniform=True, adapt_type="romanowski",<br /> eqn_type="rR")<br /><br /> params_silver_p_L = dict(l=0, Z=47, a=0, b=150, el_num=4, el_order=13,<br /> eig_num=50, mesh_uniform=False, mesh_par1=35, adapt_type="p",<br /> eqn_type="R")<br /> params_silver_hp_L = dict(l=0, Z=47, a=0, b=150, el_num=4, el_order=13,<br /> eig_num=50, mesh_uniform=False, mesh_par1=35, adapt_type="hp",<br /> eqn_type="R")<br /></pre><br />I mean, this is kind of obvious if you think about it, 
but for some reason I didn't do that at all at the beginning, because I thought I'd just run this once and be done with it. But I had to run it like 20x: regenerating the plots, then creating a table about the meshes, then redoing the table after changing the error tolerance, and so on.<br /><br />Besides that I also got permission to release my code, so I'll go over it in the coming days and generate nice patches against Hermes1D.<br /><br />Also, in the process of creating the poster, I played a lot with p-FEM, uniform-p-FEM, hp-FEM and h-FEM, and I will keep playing with them. It's clear to me now that our current Hermes1D is not optimal. In particular, the convergence of hp-FEM and p-FEM (as implemented right now) greatly depends on the initial mesh.<br /><br />Nevertheless, even with the above limitations, hp-FEM seems to be really good if you don't know anything a priori about the problem/mesh. One should not draw any deep conclusions in 1D (it might be a bit different in 2D and 3D, and I only did a couple of test problems), but from my experience so far, hp-FEM is a really good choice if you just want to solve the problem and get decent convergence (way better than h-FEM, and in general about the same as uniform-p-FEM with an optimized mesh).<br /><br />Another conclusion is that uniform-p-FEM (also called the spectral element method), if you optimize the mesh for the problem, is very fast. All you have to do is increase the polynomial order; it goes very straight on the convergence graphs and is very hard to beat. Also (and I would like to write this up in the coming days), the algorithm for optimizing the mesh is really simple: just solve with high "p", then play with the mesh parameters (for a logarithmic mesh, there are only 2 parameters --- the number of elements, and the ratio of the first vs. the last element) until the eigenvalues one is interested in are converged (to the given accuracy), and optimize with respect to DOFs. 
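For illustration, a logarithmic (geometrically graded) mesh with exactly those two parameters could be generated like this (a sketch with made-up names, not the actual Hermes1D code):

```python
import numpy as np

def logarithmic_mesh(a, b, n_elem, ratio):
    """Nodes of a graded 1D mesh on [a, b]: n_elem elements whose lengths
    grow geometrically, so that the last element is `ratio` times longer
    than the first -- the two parameters mentioned in the text."""
    q = ratio ** (1.0 / (n_elem - 1))     # quotient between neighboring elements
    lengths = q ** np.arange(n_elem)      # relative element lengths
    nodes = np.concatenate(([0.0], np.cumsum(lengths)))
    return a + (b - a) * nodes / nodes[-1]

# e.g. a mesh on [0, 150] (similar in spirit to the silver runs above)
mesh = logarithmic_mesh(0.0, 150.0, 35, 100.0)
```

Small elements then cluster near the origin, where atomic wavefunctions vary fastest, which is exactly why such meshes work well for these problems.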
The algorithm can also "look" at the convergence graphs and make sure they are steep enough. For atomic problems, my experience shows that the logarithmic mesh is good enough (as long as you optimize it). The advantage is that you do this once, and then (for close enough potentials in the Schroedinger equation) you just increase "p", which is very robust and fast (no need for a reference mesh, trial refinements and so on).<br /><br />When I get back to Reno, Pavel and I will do more research on hp-FEM, and I don't think this is the last word on it. We need to review how we choose the candidates for eigenvectors, especially "p" vs. "hp", and make it more robust. We'll see.Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com0tag:blogger.com,1999:blog-6568744196982634289.post-61222667839607773172010-07-31T00:39:00.000-07:002010-07-31T00:49:27.857-07:00Week July 26 - 30This week I have been wrapping up my work at LLNL and trying to generate some comparisons between different approaches for adapting to multiple eigenvectors at once.<br /><br />It turns out that most of the issues are in the way we create the new mesh for the next adaptivity iteration, in particular how we choose which candidates to refine. I have tried converging to the lowest eigenvector (as well as to any other single eigenvector), to the sum of the eigenvectors, to each eigenvector individually while taking the union of the meshes, and so on. I have also implemented uniform p-FEM, and tried p-FEM and hp-FEM using the approaches mentioned above.<br /><br />It seems to be crucial to have a good initial mesh, at least for p-FEM. If I use a good mesh and p-adaptivity, I am able to get the best results so far. In principle hp adaptivity should be at least as good, but our current approach doesn't show it yet. 
Hopefully we'll manage to make it work.<br /><br />Besides that I have also implemented H1 norms for the Function class (based on Fekete points) by calculating coefficients with respect to an FE basis, and some other little things.Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com0tag:blogger.com,1999:blog-6568744196982634289.post-2743963938647685852010-07-23T19:06:00.000-07:002010-07-23T19:27:53.194-07:00Week July 19 - 23This week I have learned how projections work in detail and wrote it up here:<br /><br /><a href="http://theoretical-physics.net/dev/src/math/la.html">http://theoretical-physics.net/dev/src/math/la.html<br /></a><br />including all proofs (that the orthogonal projection finds the closest vector and so on). At the end of the notes I have calculated some examples in 1D, so that one can see that the result indeed doesn't depend on the basis, and that the basis doesn't even have to be orthogonal. Then one has to use this approach to calculate the coefficients:<br /><br /><a href="http://theoretical-physics.net/dev/src/math/la.html#nonorthogonal-basis">http://theoretical-physics.net/dev/src/math/la.html#nonorthogonal-basis<br /></a><br /><br />I have then implemented it in h1d. Due to lack of time, I am now developing everything in my private branch. I'll obtain the permission in about 2 or 3 weeks, and then I'll push it into master.<br /><br />Besides that I have implemented Chebyshev points for orders greater than 48, for which I don't have exact Fekete points anymore (it'd be just a matter of running my sympy script longer, but I was hitting some accuracy issues when solving those large polynomials numerically -- one needs to obtain all the roots, so Chebyshev points are ok for now). So I can now represent arbitrary polynomials in 1D. <br /><br />I have implemented powering of the discrete function; it automatically determines which polynomial order it has to use and creates a new discrete function (the power) on a new mesh. 
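The projection onto a non-orthogonal basis mentioned above boils down to solving a small Gram system; here is a tiny self-contained example (mine, not taken from the linked notes), projecting f(x) = x^2 onto span{1, x} in the L2 inner product on [0, 1]:

```python
import numpy as np

# Gram matrix G[i, j] = <b_i, b_j> and right-hand side b[i] = <b_i, f>
# for the (non-orthogonal) basis {1, x} and f(x) = x**2 on [0, 1],
# with <u, v> = integral of u*v over [0, 1]; the coefficients solve G c = b.
G = np.array([[1.0, 1.0 / 2.0],        # <1,1>,  <1,x>
              [1.0 / 2.0, 1.0 / 3.0]]) # <x,1>, <x,x>
b = np.array([1.0 / 3.0, 1.0 / 4.0])   # <1,x^2>, <x,x^2>
c = np.linalg.solve(G, b)
# c == [-1/6, 1]: the best L2 fit of x^2 by a line on [0, 1] is -1/6 + x
```

The same recipe works for an FE basis: assemble the Gram (mass) matrix and load vector, then solve one linear system per projected function.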
I wrote lots of tests for the powering code, and hit an interesting bug: my naive comparison code<br /><pre><br />assert abs(x-y) < eps<br /></pre><br />was no longer good enough for larger numbers, so I had to read some documentation and implement the following function:<br /><pre><br />@vectorize(0, 1)<br />def feq(a, b, max_relative_error=1e-12, max_absolute_error=1e-12):<br /> a = float(a)<br /> b = float(b)<br /> # if the numbers are close enough (absolutely), then they are equal<br /> if abs(a-b) < max_absolute_error:<br /> return True<br /> # if not, they can still be equal if their relative error is small<br /> if abs(b) > abs(a):<br /> relative_error = abs((a-b)/b)<br /> else:<br /> relative_error = abs((a-b)/a)<br /> return relative_error <= max_relative_error<br /></pre><br />Then I implemented the global H1 and L2 projections; so far the function to be projected is hardwired, and I still need to allow the user to specify any discrete function to project, precalculate it and so on.<br /><br />I wrote a bunch of tests for the projections and powers, and kept discovering bugs by writing more tests, so the progress is slow, but at least I can trust the code that is tested.<br /><br />I also helped Pavel fix a couple of segfaults, as well as some other things.Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com2tag:blogger.com,1999:blog-6568744196982634289.post-11617440025488854832010-07-19T22:57:00.000-07:002010-07-20T00:00:39.254-07:00Theoretical Physics Reference BookToday I fulfilled my old dream --- I just created my first book! 
Here is how it looks:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgasBbsj2FFU4dAOC05yYuBabfevpeMCIWxAVmRwIHgRYdrd-3BtiskVXk_pL6A-WVj5ehLxjUccc40fTEfyr49-I648lTuHmcKJn-kKIiNpdmv727Owv0YR5OFptjtjXhIjpyfN7qisNIG/s1600/Jul_19_2010_3169.jpg"><img style="cursor:pointer; cursor:hand;width: 300px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgasBbsj2FFU4dAOC05yYuBabfevpeMCIWxAVmRwIHgRYdrd-3BtiskVXk_pL6A-WVj5ehLxjUccc40fTEfyr49-I648lTuHmcKJn-kKIiNpdmv727Owv0YR5OFptjtjXhIjpyfN7qisNIG/s400/Jul_19_2010_3169.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5495867261164653778" /></a><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjw5jUVDxhoNPJG9i9wo0OwZ2VjAbUylZCEDIELN2QwLS-z_GR3fQv6U08XJmpXuGYzoLKo_Hbc90RIpzxtd1vZ2l_r8HjnXbftGghRYyBZWOuvFJOH-rl-etQgWvVV3WEGRHeybllv8Q31/s400/Jul_19_2010_914.jpg" style="cursor:pointer; cursor:hand;width: 400px; height: 300px;" border="0" alt="" id="BLOGGER_PHOTO_ID_5495867250591418786" /><br />More images<br /><a href="http://picasaweb.google.com/ondrej.certik/TheoreticalPhysicsReference">here</a>.<br /><br />Here is the source code of the book: <a href="http://github.com/certik/theoretical-physics">http://github.com/certik/theoretical-physics</a>; the repository contains a branch 'master' with the code and 'gh-pages' with the generated html pages, which are hosted at github at the url <a href="http://theoretical-physics.net">theoretical-physics.net</a>.<br /><br />Then I published the book at Lulu: <a href="http://www.lulu.com/product/hardcover/theoretical-physics-reference/11612144">http://www.lulu.com/product/hardcover/theoretical-physics-reference/11612144</a>. I wanted a hardcover book, so I set up a project at Lulu, used some Lulu templates for the cover, and that was it. 
Lulu's price for the book is $19.20 (166 black & white pages, hardcover); on top of that I can set my own price, and the difference probably goes to me. I set the price to $20, because Lulu has free shipping for orders of $20 or more. You can also download the pdf (for free) at the above link (or just use my git repository). So far this didn't cost me anything.<br /><br />I then ordered the book myself (just like anybody else would, at the above address) and it arrived today. It's a regular hardcover book. Beautiful; you can browse the pictures above. It smells delicious (that you have to believe me). And all it cost me was $19.20.<br /><br />As for the contents, you can browse it online at <a href="http://theoretical-physics.net">theoretical-physics.net</a>; essentially it's most of the physics notes that I have collected over the years. I'd like to treat books like software --- release early, release often. This is my first release and I would call it beta. The main purpose was to see if everything goes through, how long it takes (the date inside the book is July 4, 2010; I created and ordered it on July 5 and got the physical book on July 19) and what the quality is (excellent). I also wanted to see how the printed pages correspond to the pdf (you can see for yourself in the photos; click on the picasa link above).<br /><br />Now I need to improve the first pages a bit, as well as the last pages, improve the index, write some foreword and so on. I also need to think about how to better organize the contents and generally improve it. I also need to figure out some versioning scheme; so far this is version 0.1. I think I'll do edition 1, edition 2, edition 3, and so on, and whenever I feel that I have added enough new content, I'll just publish a new edition. 
So if you want to buy it, I suggest waiting for the 1.0 version, which will have the mentioned improvements.<br /><br />It'd also be cool to have all the editions online somehow and to create nice webpages for the book (currently theoretical-physics.net points directly to the book html itself).<br /><br />So far the book is just text. I still need to figure out how to handle pictures, and also whether or not to use program examples (in Python, using sympy, scipy, etc.). So far I am inclined not to include any program code, so that I don't need to maintain it.<br /><br />Overall I am very pleased with the quality; apart from the minor issues that I mentioned above, everything else ended up just fine. I think we have come a long way since the invention of the <a href="http://en.wikipedia.org/wiki/Printing_press">printing press</a>. Anybody can now create a book for free, and if you want to hold the hardcopy in your hands, it costs around $20. You don't need to order a certain number of books, nor partner with some publisher, etc. I think that's just awesome.Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com12tag:blogger.com,1999:blog-6568744196982634289.post-1097305384006568172010-07-16T22:06:00.000-07:002010-07-16T22:16:38.022-07:00Week July 12 -- July 16This week I was implementing hp-adaptivity based on H1 projections, first for the ground state and later for other states.<br /><br />It took me the whole week to debug things and I am still not done. There were issues with normalizing vectors and other problems. I am now able to converge any individual vector and that seems to work fine, but somehow I am still not able to converge multiple vectors.<br /><br />I also spent some time speeding up my implementation of fekete points, and I have implemented the fast evaluation based on Lagrange interpolation polynomials. For example, I made the l2_norm() method about 10x faster. 
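A minimal sketch of fast Lagrange evaluation via the barycentric formula (an illustrative stand-alone version, not the actual fekete-points code):

```python
import numpy as np

def lagrange_eval(xi, yi, x):
    """Evaluate the polynomial interpolating (xi, yi) at the points x
    (assumed distinct from the nodes), using the barycentric form of
    Lagrange interpolation: precomputable weights, no polynomial solves."""
    xi = np.asarray(xi, dtype=float)
    yi = np.asarray(yi, dtype=float)
    x = np.asarray(x, dtype=float)
    # barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)
    diff = xi[:, None] - xi[None, :]
    np.fill_diagonal(diff, 1.0)
    w = 1.0 / diff.prod(axis=1)
    c = w / (x[:, None] - xi[None, :])   # shape (len(x), len(xi))
    return (c @ yi) / c.sum(axis=1)

# interpolating x^2 through 3 nodes reproduces the degree-2 polynomial exactly
xi = np.array([0.0, 0.5, 1.0])
val = lagrange_eval(xi, xi**2, np.array([0.25, 0.75]))
```

Since the weights depend only on the nodes, they can be cached per element, which is where the speedup over the naive Lagrange formula comes from.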
I think it's about as fast now as if it were written in C++ directly. The main loops are now optimized C without any Python C/API calls.<br /><br />My plan is now to implement taking squares of the solutions (exactly), and so on, and projecting them back to the original mesh. I'll try converging to that. At the same time, I'll try to converge to multiple eigenvalues. Hermes1d needs some improvements that take me quite some time to implement.Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com0tag:blogger.com,1999:blog-6568744196982634289.post-57225776778354371552010-07-10T12:48:00.000-07:002010-07-10T13:01:39.907-07:00Week July 5 -- July 9This week I have learned how to use the new C++ support in Cython and posted a demo project at github (<a href="http://github.com/certik/cpp_test">http://github.com/certik/cpp_test</a>), then I refactored the Python wrappers in hermes1d and I think things are now way cleaner.<br /><br />Then I finished my reimplementation of Romanowski's algorithm for adapting eigenvalues for the radial Schroedinger equation; I extracted his values from the graphs, calculated the same thing myself and it agrees perfectly.<br /><br />I am now polishing the h1 adaptivity in h1d; I essentially reimplemented it myself, so that I can consider more candidates (all possible combinations of "p" and "h", and soon I'll also consider bisecting, trisecting and so on of the interval).<br /><br />I've implemented hydrogen wavefunctions in sympy and use them to project onto a very fine mesh (12th order, lots of elements), convert to Fekete points and use that for the adaptivity. It's in pure Python, as I need to develop very fast to get some results and see which approach is best. Everything is fast except the selection of candidates, which I'll now rewrite in Cython, trying to use hermes1d whenever possible.<br /><br />Next week I'll try to merge my adaptivity with the hermes1d adaptivity and make it fast. 
And see how it converges.<br /><br />I've also spent about 10 hours improving Pavel's hermes2d branch, as well as implementing the Vector class in hermes_common.Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com0tag:blogger.com,1999:blog-6568744196982634289.post-85400724669261662502010-07-03T01:41:00.000-07:002010-07-03T01:53:56.612-07:00Week June 28 -- July 2On Saturday I flew to Prague, then on to Pilsen for the <a href="http://hpfem.org/events/esco-2010/">ESCO2010</a> conference. I had two presentations there and I met lots of awesome people.<br /><br />I discussed with <a href="http://www.mathematik.uni-dortmund.de/~kuzmin/">Dmitri</a> how to solve the Euler equations using FEM, and I think I now know how to implement it in hermes2d. I will try to give it a shot at the end of the summer, as this is something that has been bothering me a lot.<br /><br />I have made progress in my fekete points code, and I also wrote analytic solutions for the hydrogen atom in sympy. I'll wrap it up over the weekend and push it to h1d.<br /><br />I have investigated how to use github for collaboration using git; here is how it looks for sympy: <a href="http://github.com/sympy">http://github.com/sympy</a>.<br /><br />I have figured out how to host any domain at github, in particular this:<br /><br /><a href="http://theoretical-physics.net/">http://theoretical-physics.net/<br /></a><br />now runs at github. 
I think this is really awesome.<br /><br />Besides that, I spent some time discussing with Pavel and other people how to do hp-adaptivity in h1d, helped a couple of people with hermes2d, and so on.<br /><br />My plan for the next week is to implement hp-adaptivity in h1d for eigenproblems, as well as to finish my fekete machinery, make it fast using cython and use it in conjunction with the h1d adaptivity.Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com2tag:blogger.com,1999:blog-6568744196982634289.post-55889500295428410212010-06-25T22:49:00.000-07:002010-06-25T23:12:05.078-07:00Week June 20 -- June 25This week I implemented a Dirac solver in 1D. I have it in my private branch, and at the end of the summer I'll get permission to open source it. Originally it didn't work at all, but then I found a bug and it seems to be working now.<br /><br />Before that I implemented an example for the y''+k^2*y=0 equation in hermes1d by writing it as two coupled first order equations and solving the eigenproblem for "k".<br /><br />I also started to implement a Function class that represents a function in 1D using Fekete points, and I plan to implement orthogonal projections and adaptive algorithms for refining the mesh, so that I can play with adaptivity in 1D.<br /><br />Besides that I studied a couple of articles about using b-splines to solve the radial Dirac equation, and also about using the variational formulation for FEM.<br /><br />I helped fix hermes_common and did lots of administration at the lab (10 online courses and other things). 
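As an illustration of that eigenproblem (not the hermes1d FEM formulation, just a plain finite-difference sketch), the eigenvalues k^2 = 1, 4, 9, ... already emerge from a simple discretization:

```python
import numpy as np

# Discretize y'' + k^2 y = 0 on [0, pi] with y(0) = y(pi) = 0 using central
# differences; the eigenvalues of the second-difference matrix for -y''
# approximate k^2 = 1, 4, 9, ...
n = 200
h = np.pi / n
A = (np.diag(2.0 * np.ones(n - 1))
     + np.diag(-np.ones(n - 2), 1)
     + np.diag(-np.ones(n - 2), -1)) / h**2
k_squared = np.sort(np.linalg.eigvalsh(A))
# k_squared[:3] is close to [1, 4, 9]
```

The coupled first-order formulation used in hermes1d solves the same spectrum, just written as a generalized eigenproblem in "k" instead.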
We also got an awesome <a href="https://lasers.llnl.gov/">NIF</a> tour.<br /><br />Hermes1D patches:<br /><pre><br />Ondrej Certik (26):<br /> Remove the autogenerated Makefile<br /> hermes_common updated<br /> Dirac solver added<br /> Plot only 10 eigenfunctions<br /> Fix the number of equations bug<br /> Generate the Gauss Lobatto points<br /> Add the autogenerated file<br /> Don't use strings as dict keys<br /> Fekete points manipulation implemented<br /> fekete: solve the coefficients, tests pass<br /> More tests added<br /> system_sin_eigen example added<br /> Add a comment about the Lagrange interpolation<br /> Function.project_onto() implemented<br /> Function.plot() implemented<br /> Comparisons implemented<br /> Fix precision problems<br /> Improve tests<br /> Start implementing the adaptivity<br /> fekete: add a debug code<br /> Removing the Dirac solver for now<br /> Use smaller basis, while still getting reasonable results<br /> Add harmonic oscillator option<br /> hermes_common updated<br /> Use jsplot if it is available<br /> Better demo<br /></pre><br /><br />hermes_common patches:<br /><pre><br />Ondrej Certik (6):<br /> Revert "fixed pxd"<br /> Update the generated .cpp file<br /> Increase the precision in tests to 10^-10<br /> Add tests for solve_linear_system_dense_lu()<br /> Enable scipy tests<br /> Test for solve_linear_system_cg() added<br /></pre>Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com0tag:blogger.com,1999:blog-6568744196982634289.post-35688788824906428102010-06-18T21:04:00.000-07:002010-06-18T21:26:21.026-07:00June 12 -- June 18This week I have implemented a JSPlot library, which lets you use the matplotlib API but plot into your web browser:<br /><a href="http://github.com/certik/jsplot">http://github.com/certik/jsplot</a><br />(Scroll down a little bit to see the screenshots and examples.)<br /><br />This was the last missing piece to be able to develop with FEMhub (I don't have root access to my
computer, running RHEL5). Now I have the full Python stack working (scipy, all the solvers, mayavi, ...), plus nice plotting in the browser. <br /><br />I have spent several days investigating how to write weak forms for the radial Dirac equation, as well as how to derive the action for it. It turns out that one takes the Lagrangian from quantum electrodynamics (QED), converts it to spherical coordinates (which, as you can imagine, is a hell of a job) and then integrates over the angles; what remains is the Lagrangian (resp. action) for the radial Dirac equation. People have used that a lot in the literature, but I just could not figure out where they took the functional from. So that part is clear now, and then I went on to coding.<br /><br />Here are the commits for JSPlot:<br /><pre><br />Ondrej Certik (38):<br /> Initial commit<br /> Make index.html work<br /> README added<br /> Hook up raphael<br /> Serve raphael-min.js locally<br /> Plot simpler stuff<br /> Works fine<br /> plots work<br /> Download all the raphael stuff locally<br /> raphael.html added<br /> Use {% url %}<br /> Use simpler urls<br /> Typo fix<br /> Flotr demo added<br /> Use flotr in the index page<br /> Don't recalculate<br /> Add data by hand<br /> Generate the data in Python<br /> Add another flotr demo<br /> Example mpl plot added<br /> jsplot example added<br /> runserver.py added<br /> Hook it up with jsplot<br /> Add a screenshot<br /> Better README added<br /> README improved<br /> Show some testing data<br /> Use better testing data<br /> License added<br /> setup.py added<br /> Use testing data if data == []<br /> Just return from the function on CTRL-C<br /> spkg-install added<br /> prepend the .. 
path, instead of append<br /> Use the proper local files in ./manage.py<br /> Turn off points<br /> Fix a bug<br /> Add grid and legend<br /></pre><br />I fixed some scipy warnings in hermes2d:<br /><pre><br />ondrej@crow:~/repos/hermes2d(master)$ git weekreport <br />Ondrej Certik (4):<br /> hermes_common updated<br /> Fix the other SciPy warning in Python wrappers<br /> Update the generated .cpp file<br /> update hermes_common<br /></pre><br />I have fixed Python wrappers in hermes1d, made the Schroedinger example work again:<br /><pre><br />ondrej@crow:~/repos/hermes1d(master)$ git weekreport <br />Ondrej Certik (19):<br /> Use jsplot if available<br /> Build everything for now<br /> Make the Python wrappers work again<br /> Fix the lhs/rhs/residual to build properly<br /> Polish the forms a little bit<br /> Enable assembling<br /> Implement c2py_Mesh<br /> Use c2py_Mesh in the schroedinger example<br /> Update insert_matrix in Schroedinger<br /> Fixed the rest in Schroedinger<br /> copy_mesh_to_vector() and copy_vector_to_mesh() added to the Mesh class<br /> Use JSPlot<br /> Linearizer.get_xy() fixed<br /> Implement Mesh.copy_vector_to_mesh() and use it from get_xy()<br /> Use CooMatrix to assemble<br /> Implement pysparse solver<br /> Polish the numpy solver<br /> Polish the printing a little bit<br /> Rename assemble_jacobian() to assemble_matrix()<br /></pre><br />I fixed some build issues with FEMhub:<br /><pre><br />ondrej@crow:~/repos/femhub(master)$ git weekreport <br />Ondrej Certik (3):<br /> Add gnutls dependency for Python<br /> opencdk added<br /> Fix libgpg_error to build in parallel<br /></pre><br />and I have some more patches to the build system in my "pu" branch at github:<br /><pre><br />* 44398d8 (github/pu) Use absolute paths in "femhub -i"<br />* 2b17bf0 Fix "femhub --devel-install" to normalize the paths<br />* 3b41261 Make "femhub --shell" stay in the current directory<br /></pre><br />I still need to test that femhub builds fine with 
those (it always takes quite some testing to make sure it builds properly).Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com0tag:blogger.com,1999:blog-6568744196982634289.post-16323981343961692202010-06-11T23:44:00.000-07:002010-06-12T00:06:15.414-07:00week June 5 -- June 11On Sunday I moved to Livermore.<br /><br />On Monday I rebased my patches for Euler Equations, DG and FVM for hermes2d and sent them for review. Unfortunately, it took till Friday for them to get reviewed, but they are now in.<br />I couldn't sleep, so at around midnight I turned on my computer and rewrote the build system for femhub; it took me about 1.5h. Then I could finally sleep well.<br /><br />On Tuesday I had my new hire orientation. During it I wrote a couple more patches for femhub.<br /><br />On Wednesday I got my badge and spent the rest of the day with paperwork. They didn't manage to get my computer set up, so I studied a bit of Quantum Field Theory from a book that I found in our room and then spent the rest of the night figuring out the weak formulation for the radial Dirac equation. So far no luck, but at least I wrote it in many different forms in my handwritten notes. Sometimes it's not bad to be cut off from the internet and computers.<br /><br />On Thursday I got my computer account set up and spent most of the day with paperwork.<br /><br />On Friday I finally did some real work. Then we went to a pub, got home and drank the rest of my Budvars (Czechvar) and Plzeňs (Pilsner Urquell).<br /><br />The FEMhub build system is really cool. It's written in Python and I totally got rid of the old Sage build system.
Some features of the new one:<br /><br /> * dependencies<br /> * automatically uses all your processors (unless told otherwise)<br /> * simple Python script (414 lines) that handles everything, plus the sage-env, sage-spkg and sage-make_relative scripts (the rest I simply deleted)<br /> * allows you to install just some packages, for example "./femhub -i python" just installs python and its dependencies<br /> * allows you to unpack any package into the devel/ directory and then build it<br /><br />I will now use it to develop hermes1d+schroedinger and dirac solvers over the summer, so it will likely get improved in the coming weeks. (I have to use it, because my computer is some old RHEL5, and I don't have root access. With FEMhub, I have all the Python libraries plus cmake and similar stuff right there. It's really awesome.)<br /><br />In total, I wrote 52 patches for FEMhub, 3 for hermes_common, 9 for hermes2d, 8 for hermes1d and 1 for sympy:<br /><pre><br />FEMHUB:<br />Ondrej Certik (52):<br /> scipy and python upgraded<br /> Remove the old buildsystem<br /> New Python based buildsystem implemented<br /> Install hermes2d by default<br /> Compile in parallel<br /> Let CmdException propagate<br /> Handle dependencies<br /> Fix the PYTHONPATH issue<br /> Add a check that IPython is installed<br /> Add Cython dependency to hermes2d<br /> Add matplotlib to hermes2d deps<br /> Keep track of installed packages<br /> Add the -j option<br /> Add the rest of the packages<br /> Disable sphinx for now<br /> Add a todo item<br /> Disable mayavi for now<br /> Add pysparse to fipy deps<br /> Change the banner<br /> Fix a typo<br /> Remove old .hg files<br /> setuptools added into fipy's deps<br /> lab() implemented<br /> Don't let other import errors to pass silently<br /> Build bzip2 before Python<br /> Simplify the makefile<br /> Add the jinja2 package<br /> Remove one forgotten " from a message<br /> --shell and -s/--script options implemented<br /> Fix cmake so that it
builds if old cmake is present<br /> expandvars() added to cmd()<br /> Remove the rest of old buildsystem files<br /> Move spd imports to femhub imports<br /> Move download_packages into the femhub buildsystem<br /> Remove a forgotten file from the old buildsystem<br /> Create the standard directory if it doesn't exists<br /> Polish the banner appearance<br /> femhub-run: little refactoring<br /> Use proper ipythonrc and matplotlibrc<br /> Wrap lab(), add debugging statements<br /> Don't import lab() in the ipythonrc<br /> Fix the missing "-p" when creating the standard directory<br /> Fix the problem with ambiguous names<br /> cpu_count refactored<br /> hermes1d added<br /> -f/--force option added<br /> --unpack added<br /> --pack option implemented<br /> --devel-install option implemented<br /> Use MAKEFLAGS instead of MAKE env variable<br /> Improve the FEMhub shell prompt<br /> Print info when unpacking<br /><br /><br />hermes2d:<br />Ondrej Certik (9):<br /> Removing the old-code directory<br /> hermes_common update<br /> Reformat the documentation in the Geom class<br /> Python wrappers updated<br /> Regenerate the _hermes2d.cpp<br /> plot.py: ScalarView updated<br /> Element orientation exported<br /> Sanity checks and docs<br /> Add all shapefunctions to the space L2<br /><br />hermes1d:<br />Ondrej Certik (8):<br /> spkg-install added<br /> "make install" implemented<br /> Make it build on the Mac<br /> Allow to turn off examples<br /> Use std::max(), fixed several warnings<br /> hermes_common updated<br /> Fix h1_polys.py to work with FEMhub's SymPy<br /> Fix the new line warning<br /><br />hermes_common:<br />Ondrej Certik (3):<br /> Add numpy2c_double_inplace() to _hermes_common.pxd<br /> Update the generated cpp file<br /> Fix a small double -> int bug/warning<br /><br />sympy:<br />Ondrej Certik (1):<br /> pyglet: fix string exceptions<br 
/></pre>Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com0tag:blogger.com,1999:blog-6568744196982634289.post-51170035496306186162010-06-04T14:59:00.000-07:002010-06-04T15:57:57.716-07:00week May 30 -- June 4Over the weekend and Monday I went to Livermore, CA to rent a room to live in. On Monday and Tuesday my uncle visited me from Prague, so I showed him Virginia City and Lake Tahoe; that was lots of fun.<br /><br />Then I tried to debug hermes1d and some matrix issues in there for a couple of hours, but have failed so far.<br /><br />I've helped Sameer fix h5py to compile with FEMhub.<br /><br />Yesterday I spent 12h setting up <a href="http://progit.org/book/ch4-7.html">gitosis</a> on our server and finally got it done; here is the web interface:<br /><br /><a href="http://git.hpfem.org/">http://git.hpfem.org/</a><br /><br />I had to patch the gitosis repository itself; here are my changes: <a href="http://github.com/certik/gitosis">http://github.com/certik/gitosis</a> (I've also sent them to Tommi Virtanen, the gitosis author, but I haven't heard back yet.)<br />In case you wanted to install it as well, I've posted instructions <a href="http://groups.google.com/group/hpfem-group/browse_thread/thread/d3e8da571c11d0d">here</a>.<br /><br />I've figured out how to use msysgit with Windows and use ssh keys to log in to a Linux box.
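As a side note on the key handling: gitosis never sees the private key, only an OpenSSH public-key file committed to the gitosis-admin repository (as keydir/username.pub, if I recall the layout correctly). Here is a minimal sketch of generating one, assuming OpenSSH's ssh-keygen is on the PATH; the file names are just for illustration:

```python
import pathlib
import subprocess

def make_gitosis_key(user, directory="."):
    """Generate an OpenSSH key pair and return the public-key file, renamed
    the way gitosis expects (it gets committed as keydir/<user>.pub in the
    gitosis-admin repository).  Uses an empty passphrase only to keep the
    example non-interactive; use a real one in practice."""
    priv = pathlib.Path(directory) / "id_rsa_{}".format(user)
    subprocess.run(
        ["ssh-keygen", "-q", "-t", "rsa", "-b", "2048", "-N", "", "-f", str(priv)],
        check=True,
    )
    # ssh-keygen writes the public half next to the private key with a .pub suffix
    pub = priv.with_suffix(".pub")
    target = pathlib.Path(directory) / "{}.pub".format(user)
    pub.rename(target)
    return target

key = make_gitosis_key("ondrej")
print(key.read_text().split()[0])  # the key type, e.g. ssh-rsa
```

The private half stays on the client (with msysgit, under the user's .ssh directory), which is exactly the part that turned out to be non-trivial on Windows.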
It turns out it's not so trivial, see this <a href="http://code.google.com/p/msysgit/issues/detail?id=261#c41">issue</a>.<br /><br /><br />I submitted the following patches to our projects at <a href="http://hpfem.org">hpfem.org</a>:<br /><pre><br />ondrej@raven:~/repos/hermes1d(master)$ git weekreport <br />Ondrej Certik (3):<br /> Reverting the change dfa8580<br /> hermes_common updated<br /> hermes_common updated<br /><br />ondrej@raven:~/repos/hermes2d(master)$ git weekreport <br />Ondrej Certik (5):<br /> Fix "make install" to work again<br /> hermes_common updated<br /> remove hermes_common/doc/Makefile from .gitignore<br /> Remove _XOPEN_SOURCE and _POSIX_C_SOURCE hacks<br /> Fix the compilation warning<br /></pre><br />Btw, my "git weekreport" alias is defined as:<br /><pre><br />[alias]<br /> weekreport = shortlog --since=1.weeks --author=ondrej<br /></pre><br /><br />Next week I am moving to Livermore, and my summer internship will get started.Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com1tag:blogger.com,1999:blog-6568744196982634289.post-36902736122265760812010-05-26T17:15:00.000-07:002010-05-26T17:20:51.575-07:00SymPy GSoC has startedThis summer SymPy got 5 students, you can see their proposals and blogs here:<br /><br /><a href="http://code.google.com/p/sympy/wiki/GSoC2010">http://code.google.com/p/sympy/wiki/GSoC2010</a><br /><br />All are required to blog once a week, the deadline is Friday night PST, and I'll do so too. <br /><br />This summer I will be at the <a href="http://www.llnl.gov/">Lawrence Livermore National Laboratory</a>, applying hp-FEM to radial Schroedinger and Dirac equations as well as 1D Density Functional Theory. I need to check with the management there how much I am allowed to blog about it. If I am, I'll keep blogging once a week too about it. If not, then only about sympy. 
I am super excited about the job, as it is in the electronic structure field, which is what I always wanted to do.Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com0tag:blogger.com,1999:blog-6568744196982634289.post-5919328100928752102009-12-13T19:38:00.000-08:002009-12-13T20:49:51.310-08:00ESCO 2010 conferenceAn interesting conference, the <a href="http://hpfem.org/events/esco-2010/">2nd European Seminar on Coupled Problems</a> (ESCO), will be held on June 28 — July 2, 2010 in Pilsen, Czech Republic.<br />Among the <a href="http://hpfem.org/events/esco-2010/topics/">topics</a> are solving PDEs, their applications, and using Python for scientific computing. In particular, <a href="http://gael-varoquaux.info/">Gaël Varoquaux</a> is the keynote speaker.<br /><br />It was later <a href="http://blog.jarrodmillman.com/2009/11/scipy-2010-coming-to-austin-tx-628-74.html">announced</a> that the SciPy 2010 conference is going to be held at the same time, which is really unfortunate. But here are some reasons why you should consider going to ESCO 2010 instead:<br /><ul><li>If you like numerical calculations (finite elements, differences, volumes, ...), solving partial differential equations and other problems, and programming in Python together with C/C++ or Fortran, you will have a chance to meet some of the top people in the field. The SciPy conference usually has people who solve PDEs (e.g. SciPy 09 had about 6), but ESCO 2010 will have about 60. So ESCO wins.</li><br /><li><a href="http://ntc.zcu.cz/en/staff/cimrman_r.html">Robert Cimrman</a>, who you probably know from the scipy and numpy mailing lists, also the author of the <a href="http://code.google.com/p/sfepy/">sfepy</a> FEM package in Python, lives in Pilsen, so he'll gladly show you some good Pilsen pubs.
SciPy 2010 is going to be in Austin, and while Austin has cool pubs too --- I must be fair, I liked them when I was there for the <a href="http://ondrejcertik.blogspot.com/2008/03/sage-days-8.html">Sage Days 8</a> --- it's just not comparable: the beer is better in Pilsen, it's a historic city, and there are more pubs.</li><br /><li>Pilsen is close to Prague, so you will have the chance to visit it. You should walk in the old town, have a couple of beers, etc. Here you can see some photos that <a href="http://gael-varoquaux.info/journal/prague/index.html">Gaël took</a> when we met in Prague. Again, this is incomparable with <a href="http://www.smallplanetguide.com/rentals/images/austin/soco_view.jpg">Austin</a>.</li><br /><li>It is held in the <a href="http://www.prazdroj.cz//en/come-and-visit/pilsen-brewery/lease-of-premises">Pilsner Urquell Brewery</a>. When <a href="http://hpfem.math.unr.edu/people/pavel/">Pavel Šolín</a> announced that at the SciPy 09 conference, <a href="http://blog.jarrodmillman.com/">Jarrod</a> asked "Ah, in a beer pub?". So let me be clear. The word <a href="http://en.wikipedia.org/wiki/Pilsener">pilsner</a> (the type of beer) comes from the Czech city Pilsen (Plzeň in Czech). Pilsner Urquell is not some beer pub (e.g. even Reno, where I live now, has a beer pub), it's The Brewery. Austin is a cool place (and Texas steaks are really good), but as you can see now, it absolutely cannot compete with Pilsen.<br /></li></ul><br /><br />If you have time, I can fully recommend going to ESCO 2010.
<br /><br />When I first presented <a href="http://sympy.org/">SymPy</a> at the SciPy 07 conference exactly 2 years ago, it was just something that we had started, so no one really used it. Last week, there were already 6 other people at the conference who have contributed one or more patches to SymPy: Robert Kern, Andrew Straw, Pauli Virtanen, Brian Granger, Bill Flynn, Luke Peterson.<br /><br />I gave a SymPy <a href="http://www.archive.org/details/scipy09_advancedTutorialDay1_4">tutorial</a> and the <a href="http://www.archive.org/details/scipy09_day1_13-Ondrej_Certik">main presentation</a>, and <a href="http://dlpeterson.com/blog/">Luke</a> gave a <a href="http://pydy.org">PyDy</a> + SymPy <a href="http://www.archive.org/details/scipy09_day1_17-Lightning_talks_5-8">lightning talk</a> (seek to 16:04).<br /><br />I also gave a <a href="http://www.archive.org/details/scipy09_day1_18-Lightning_talks_9-13">lightning talk</a> about my experience designing a Traits GUI for FEM (seek to 5:35).<br /><br />My advisor <a href="http://hpfem.math.unr.edu/people/pavel/">Pavel Solin</a> gave a <a href="http://www.archive.org/details/scipy09_day2_03-Pavel_Solin">talk</a> about <a href="http://hpfem.org/main/hermes.php">Hermes</a>, <a href="http://femhub.org/">FEMhub</a> and other things that we do in <a href="http://hpfem.org/">our group in Reno</a>.<br /><br />Besides that, it was awesome to meet all the guys from the scientific Python community, meet old friends and also get to know other people that I only knew from the lists. I was pleased to meet people there who solve PDEs using Python; we had many fruitful discussions together, and as a result, I have already created FEMhub spkg packages for <a href="http://www.ctcms.nist.gov/fipy/">FiPy</a>; others will follow. Our aim is to create a nice interface to all of them, so that we can easily test a problem in any PDE package and see how it performs.<br /><br />Overall, my impression is very positive.
I am very glad I chose Python as my main language many years ago; all the essential parts are now getting there: numerics (numpy, scipy, ...), symbolics (sympy, Sage, ...), 2D plotting (matplotlib, ...), 3D plotting (mayavi), GUIs (Traits UI, which supports both GTK and Qt on Linux and native widgets on Mac and Windows, plus the Sage notebook for the web), an excellent documentation tool with equation support (Sphinx), lots of supporting libraries such as sparse solvers, and a very easy way to wrap C/C++ code and speed up critical parts of Python code using Cython. It's not that each of those libraries is the best in the world --- in fact, not a single one is --- but together as an ecosystem, plus the high level of (free) support on the lists for all of those libraries, this in my opinion makes Python the number one choice for scientific computing, together with C, C++ and sometimes Fortran for CPU-intensive tasks and/or legacy libraries.Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com4tag:blogger.com,1999:blog-6568744196982634289.post-52371371127472098032009-08-22T19:08:00.001-07:002009-08-22T23:51:04.277-07:00SymPy on the google phoneHere is proof that SymPy works on the Google phone:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9yd5Sd-YKHaD8tI06ZuRSn-5ts97mePFpKr85c7M6rv5QQvHEVrWnRoLVK3IAhuLctxQPzji1Rc4_CxXZnDSva7Hgs_UwV0v3pVzbAg_v_jsvPfzvGHjH_-faxQw9AOjRyjdi_YRWOjs5/s1600-h/p2.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 300px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9yd5Sd-YKHaD8tI06ZuRSn-5ts97mePFpKr85c7M6rv5QQvHEVrWnRoLVK3IAhuLctxQPzji1Rc4_CxXZnDSva7Hgs_UwV0v3pVzbAg_v_jsvPfzvGHjH_-faxQw9AOjRyjdi_YRWOjs5/s400/p2.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5372975741225198850" /></a><br /><br />Thanks to Rae S.
Yip from Caltech and Nicolas Pinto for taking the picture.Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com1tag:blogger.com,1999:blog-6568744196982634289.post-1357207604944904742009-08-15T16:32:00.001-07:002009-08-16T16:42:58.827-07:00Los Alamos SprintLast weekend, <a href="http://dlpeterson.com/blog/">Luke</a> and <a href="http://asmeurersympy.wordpress.com/">Aaron</a> came to visit me here in Los Alamos and we accomplished the following:<br /><br />* documentation doctests fixed<br />* pexpect wrappers to maxima<br />* couple match bugs fixed<br />* lots of patches reviewed and pushed in<br />* made pydy simplify trig expressions<br />* pexpect wrappers to autolev<br />* work on the odes module<br />* visited hot springs<br /><br />Pictures of the last item were requested, so Aaron (left), Luke:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyJpCWO1uuc-0aIyGqgTtEukKTC7kIsU79SiGjSXz5A4fY6-n41Jd8tmgQViWjMDHwmN_Bdb0B1Ua8m3ZNGNbJl8AX1XyGe1y8aUQgTZbg8g6A_BuqEBjeZ4nZBOx2NS6-PPLgeKJKvpBH/s1600-h/Ondrej+Visit+007.jpg"><img style="cursor: pointer; width: 300px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyJpCWO1uuc-0aIyGqgTtEukKTC7kIsU79SiGjSXz5A4fY6-n41Jd8tmgQViWjMDHwmN_Bdb0B1Ua8m3ZNGNbJl8AX1XyGe1y8aUQgTZbg8g6A_BuqEBjeZ4nZBOx2NS6-PPLgeKJKvpBH/s400/Ondrej+Visit+007.jpg" alt="" id="BLOGGER_PHOTO_ID_5370338431930294610" border="0" /></a><br /><br />Aaron, Ondrej:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrRCFHQi-mTYHCVKPXez1G0ttW6Z7COfDCUnmAXwF2V_HyEvcfCqYGQn3WlU42kE1s1BLfTshCwXAN9lUs22L2CJ2j59QES9Ce5za7YIYVSdZ-pJvROZzjrIQEkksWtYwkUAvI2wL0KKCL/s1600-h/Ondrej+Visit+006.jpg"><img style="cursor: pointer; width: 400px; height: 300px;" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrRCFHQi-mTYHCVKPXez1G0ttW6Z7COfDCUnmAXwF2V_HyEvcfCqYGQn3WlU42kE1s1BLfTshCwXAN9lUs22L2CJ2j59QES9Ce5za7YIYVSdZ-pJvROZzjrIQEkksWtYwkUAvI2wL0KKCL/s400/Ondrej+Visit+006.jpg" alt="" id="BLOGGER_PHOTO_ID_5370338417560177154" border="0" /></a><br /><br />Ondrej, Aaron:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgw9FjtCw-wrfYRjmtoL4amdIXBmiUjTBi5jt1GeI9wiwJKR9_dQ63hC_qdgv2RBmRlcV1g-xHreOd3t__-zEHiFIt_eAzAiFM4YQyFhQELFaMCrinm1KOZaVkM1YbruUJbGMWErQ-HDe9H/s1600-h/Ondrej+Visit+001.jpg"><img style="cursor: pointer; width: 300px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgw9FjtCw-wrfYRjmtoL4amdIXBmiUjTBi5jt1GeI9wiwJKR9_dQ63hC_qdgv2RBmRlcV1g-xHreOd3t__-zEHiFIt_eAzAiFM4YQyFhQELFaMCrinm1KOZaVkM1YbruUJbGMWErQ-HDe9H/s400/Ondrej+Visit+001.jpg" alt="" id="BLOGGER_PHOTO_ID_5370338409037429474" border="0" /></a>Anonymoushttp://www.blogger.com/profile/02845032202161204018noreply@blogger.com3
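A note on the "pexpect wrappers" items in the sprint list above: the idea is to drive an external program (maxima, autolev) from Python, send it expressions, and read its answers back. The actual sympy code uses pexpect to keep one interactive session alive and match its prompts; the following is only a simplified, self-contained stand-in that shells out per query via subprocess, with python3 standing in for maxima, just to illustrate the round trip:

```python
import subprocess

def query(expr):
    """Send one expression to an external interpreter and return what it
    prints.  A real pexpect wrapper would spawn the program once and match
    its prompt between queries; here we spawn python3 fresh each time,
    which is slower but needs nothing beyond the standard library."""
    proc = subprocess.run(
        ["python3", "-c", "print({})".format(expr)],
        capture_output=True, text=True, check=True,
    )
    return proc.stdout.strip()

print(query("2**10"))  # -> 1024
```

The persistent-session variant matters in practice because a CAS like maxima has expensive startup and carries state (assignments, assumptions) between queries.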