Sunday, July 26, 2009

How to: Update Windows driver from command line



  • Download DevCon from here
  • Create a list of the drivers on your machine: devcon drivernodes * > drivers.txt
  • Find driver you want to update in the list, e.g. SoundMAX Integrated Digital Audio
  • You may check that you've selected the proper driver by running
    devcon status "PCI\VEN_8086&DEV_24D5&SUBSYS_80F31043&REV_02". Please note that you should put the driver ID in "quotation marks" and use only the part of the string up to and including REV_02 (the revision part).
  • Run devcon update (or devcon updateni for a non-interactive update) with the following parameters:
      Use the name of the .inf file from the "Inf file is ..." line of that driver's section in drivers.txt.
      Use the driver ID from the line at the end of that section, but only up to and including the revision part, e.g. for PCI\VEN_8086&DEV_24D5&SUBSYS_80F31043&REV_02\3&267A616A&0&EA take everything up to REV_02.
  • The command line would be something like:
    devcon update "c:\windows\inf\oem0.inf" "PCI\VEN_8086&DEV_24D5&SUBSYS_80F31043&REV_02"

    Monday, July 20, 2009

    TBB Performance


    My colleague, who is working on optimization of quantitative calculations and is playing around with Intel's Threading Building Blocks (TBB), shared some interesting performance results with me.

    Let's say you are trying to do a simple math operation, e.g. summing the numbers in arrays A and B into the corresponding cells of array S.
    The intuitive guess would be that this is exactly the type of operation that would benefit from running in parallel on a few threads/processors. Well, it's not, and here are the results (the test was executed on a dual-core Intel Dell laptop):

    [Results chart: straight loop vs. TBB with manual splitting (1000 pieces) vs. TBB with automatic splitting]

    As you can see, running it as a straight loop is around four times faster than running it with TBB using manual splitting into 1000 pieces, and around eight times faster (almost an order of magnitude!) than running it with TBB's automatic splitting heuristic.

    Our guess is that a single processor is already perfectly tuned for this type of task (locality of reference, L2 cache, optimistic instruction prefetching from the pipeline, etc.). Once you employ several processors you introduce "coordination overhead" and pay a performance price for it.

    It seems that TBB would provide performance benefits for tasks within a certain complexity band: more complicated than the one described here, but still not "too complicated", so that the coordination overhead does not become too high...

    Here is the code if you want to check it out.
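
    For reference, here is a minimal sketch of this kind of comparison, assuming TBB's tbb::parallel_for and tbb::blocked_range; the array size, grain size and timing harness are illustrative placeholders, not the original benchmark.

    #include <cstdio>
    #include <vector>
    #include "tbb/blocked_range.h"
    #include "tbb/parallel_for.h"
    #include "tbb/partitioner.h"
    #include "tbb/task_scheduler_init.h"
    #include "tbb/tick_count.h"

    // S[i] = A[i] + B[i] -- the "too simple to parallelize" operation discussed above
    struct SumBody {
        const double *a, *b;
        double *s;
        void operator()(const tbb::blocked_range<size_t>& r) const {
            for (size_t i = r.begin(); i != r.end(); ++i)
                s[i] = a[i] + b[i];
        }
    };

    int main() {
        tbb::task_scheduler_init init;              // needed by the TBB versions of that time
        const size_t n = 10 * 1000 * 1000;          // placeholder size
        std::vector<double> A(n, 1.0), B(n, 2.0), S(n, 0.0);
        SumBody body = { &A[0], &B[0], &S[0] };

        // 1) straight loop
        tbb::tick_count t0 = tbb::tick_count::now();
        for (size_t i = 0; i < n; ++i)
            S[i] = A[i] + B[i];
        tbb::tick_count t1 = tbb::tick_count::now();

        // 2) TBB with manual splitting into ~1000 pieces (explicit grain size)
        tbb::tick_count t2 = tbb::tick_count::now();
        tbb::parallel_for(tbb::blocked_range<size_t>(0, n, n / 1000), body);
        tbb::tick_count t3 = tbb::tick_count::now();

        // 3) TBB with the automatic splitting heuristic
        tbb::tick_count t4 = tbb::tick_count::now();
        tbb::parallel_for(tbb::blocked_range<size_t>(0, n), body, tbb::auto_partitioner());
        tbb::tick_count t5 = tbb::tick_count::now();

        std::printf("loop: %.3fs  TBB/manual: %.3fs  TBB/auto: %.3fs\n",
                    (t1 - t0).seconds(), (t3 - t2).seconds(), (t5 - t4).seconds());
        return 0;
    }

    Being almost purely memory-bound, this operation leaves the extra threads little real work to share, which is consistent with the observations described above.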

    Calling a member function for all STL map values (in a single line of code)


    // requires <algorithm>, <functional>, <map> and <vector>
    // (tr1::bind is in <functional> on VC9 SP1, <tr1/functional> on GCC)
    using namespace std;

    int newSize = 50;
    typedef std::map<int, vector<double> > MyMap;
    MyMap m_map;

    // call vector.resize(newSize) for all values (pair->second) of the STL map;
    // the outer bind supplies the vector (pair.second) and newSize to resize
    for_each(m_map.begin(), m_map.end(),
       tr1::bind(
         mem_fun_ref(&vector<double>::resize),
            tr1::bind(&MyMap::value_type::second, tr1::placeholders::_1),
            newSize));
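
    For comparison, compilers with lambda support (C++0x/C++11) let you write the same thing without the bind expression; a sketch:

    // hypothetical lambda alternative to the tr1::bind / mem_fun_ref version above
    for_each(m_map.begin(), m_map.end(),
             [newSize](MyMap::value_type& p) { p.second.resize(newSize); });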

    Thursday, July 16, 2009

    Sociological poll results are clustered


    Occasionally I watch TV news programs that run polls on various political and social issues. First of all, the very existence of such polls is curious in itself, since it shows how many simple-minded people are out there, happily paying for phone calls to the poll numbers just to express their opinion. In my view, all those polls are nothing more than a side revenue stream for the TV and phone companies, and are specifically designed as such.

    Anyway, I've noticed an interesting fact: the results show clustering patterns, both in time and in the quantitative sense. Let me explain what I mean.
    Quantitatively, the results are always clustered around the following ratios:
    98:2
    90:10
    2/3 : 1/3
    1/2: 1/2

    I haven't done massive statistics on it, but that's my observation. Isn't that curious?

    Now, regarding "time clustering": if the poll's results end up very biased, i.e. a huge majority in favor of a certain option (like 98:2 or 90:10), then usually during the very first seconds of the poll (something like 30-40 seconds) the minority option is the leading one! It seems that people supporting an option which is really unpopular with the majority react much more actively than the average citizen.

    My lecture on distributed computing


    Here is my lecture on distributed computing which I presented recently to my colleagues.
    (Some of the slides were taken from open materials that I've googled out.)
    I think that Distributed Computing / Grid / Clouds and related technologies are going to be at the center of the next technological boom/bubble cycle.
    This is both because the technology seems cheap and mature enough to reap the fruits, and because it is likely to be employed by the Energy Grid / EnergyNET.