As my work requires developing all kinds of high-performance computing solutions, I am increasingly bothered by the limitations of Python’s Global Interpreter Lock and memory management. Python’s slow speed forces awkward workarounds to push as much work as possible into packages written in C, which often results in poor memory usage and inferior speed anyway. Recently, I learned about the Julia programming language, which aims to combine excellent performance with high development efficiency and is widely used in the scientific computing community. This series documents my Julia learning process.

  • Julia: Flajolet-Martin Algorithm

    I chose to pick up Julia because of its high performance. The problem with Python is that it works nicely 95 percent of the time, when all the computationally expensive operations can be offloaded to C libraries. However, it falls apart in the remaining 5 percent of nasty situations, and one such annoying instance recently showed up for me. I am migrating that code to Julia to see whether there is a performance boost; a minimal sketch of the Flajolet-Martin estimator appears after this list.

  • Julia: Calling C Module

    In one specific task, I need to extract the DCT coefficients from a JPEG image and load them into Julia. Since a number of existing C libraries can already do this, it is more convenient to call them directly from Julia; a sketch of the ccall mechanism appears after this list.

  • The Awkward Case of the Julia Language

    Last year, I happily started learning Julia, hoping that this promising language would drastically increase my work efficiency and change the way I work. Unfortunately, I have found that the reality is far from what I expected. So far, my conclusion is that there are few situations where using Julia consistently boosts work efficiency.
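As a concrete reference for the first post above, here is a minimal sketch of the simplified single-hash Flajolet-Martin estimator in Julia. It is not the post's actual code; the correction constant and the lack of averaging over multiple hash functions are simplifications of this textbook variant.

```julia
# Simplified single-hash Flajolet-Martin sketch: estimate the number of
# distinct elements in a stream by tracking the maximum number of trailing
# zero bits seen in the elements' hash values.

const PHI = 0.77351  # standard Flajolet-Martin correction factor

function fm_estimate(stream)
    R = 0
    for x in stream
        R = max(R, trailing_zeros(hash(x)))  # hash(x) is a UInt64
    end
    return 2.0^R / PHI  # estimated count of distinct elements
end

# Rough usage example: a stream with about 10_000 distinct values.
# A single hash gives only a coarse estimate; practical versions average
# many independent estimators.
println(fm_estimate(rand(1:10_000, 100_000)))
```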
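For the second post, the underlying mechanism is Julia's built-in ccall. The snippet below only illustrates the calling convention with libc functions on a Unix-like system; the actual DCT-coefficient extraction would call into a JPEG C library (for example libjpeg) with the same pattern, and is not reproduced here.

```julia
# Minimal illustration of ccall using libc symbols already loaded in the
# process. A third-party library is addressed the same way, but with a
# (:symbol, "libname") tuple instead of a bare symbol.

# strlen: return type Csize_t, a single Cstring argument.
n = ccall(:strlen, Csize_t, (Cstring,), "DCT coefficients")
println(n)  # 16

# getenv: may return NULL, so check before converting to a Julia String.
val = ccall(:getenv, Cstring, (Cstring,), "HOME")
println(val == C_NULL ? "HOME is not set" : unsafe_string(val))
```

When the C function hands back a pointer to an array, the Julia side declares the matching `Ptr{T}` return type and either copies the data out or wraps it with `unsafe_wrap` while the underlying memory remains valid.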