
new in git-1.5.6: git cvsexportcommit -W

Estimated read time: 1 minute

git-1.5.6 will be released soon (probably in a few weeks) and it brings some interesting news.

one of them is the new git cvsexportcommit -W switch which makes it easy to do bi-directional changes between git and cvs.

to set up your local repo:

$ CVSROOT=$URL cvs co module
$ cd module
$ git cvsimport

this will do a fresh checkout of the cvs module and will import it to git. you will have two interesting git branches: origin is the "reference" one, you should not touch it, and you can work in master.

you can commit to master, etc.

then there are two tricky operations:

first, you may want to commit back your local commits.

to do this:

$ for i in $(git rev-list --reverse origin..master)
do
        git cvsexportcommit -W -c -p -u $i
done
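the --reverse flag matters here: rev-list prints commits newest-first by default, while cvsexportcommit has to replay them oldest-first. a quick sandbox (plain git only, no cvs involved; the branch named origin just mimics the reference branch cvsimport creates) shows the ordering:

```shell
# Sandbox demo of why the loop above needs --reverse.
# Plain git only; the "origin" branch mimics the one cvsimport creates.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master   # force the classic default branch name
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m base
git branch origin                         # the untouched "reference" branch
git commit -q --allow-empty -m first
git commit -q --allow-empty -m second

# default rev-list order is newest-first; --reverse gives oldest-first:
for i in $(git rev-list --reverse origin..master); do
        git log -1 --format=%s "$i"       # prints "first", then "second"
done
```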

second, you may want to fetch upstream changes and rebase your local changes on top of them:

$ git cvsimport -i
$ git rebase origin

that's all.

cookies go to Dscho for commit d775734. :)


interesting git talk

Estimated read time: 1 minute

yesterday somebody mentioned this talk on #git. it's not a real video, just audio + slides, but it's really nice. i would say if the "Linus one" made you say "heh, this may be worth checking out", then this one will be the "hey, this one prevented me from learning things the hard way".

it's just one hour and it describes so many important tricks that i haven't encountered elsewhere yet.

just watch it.


fop 0.9x

Estimated read time: 2 minutes

uhm, this will be a long post, but i'll try to keep it short :)

a few words about fop. we write our documentation in asciidoc, which is plain text with a very simple markup. asciidoc can convert this to docbook, docbook-xsl can convert that to .fo, and finally fop can convert .fo to .pdf.

my problem with fop is that it's written in java and we just used the upstream binary. this is primarily a security problem.

so, about one and a half months ago i got the crazy idea to compile fop from source. of course the correct way to do this is to package the dependencies first. this is a real avalanche, because we didn't have too many generic java libs packaged, so i had to package many. namely:

jflex, piccolo, gnu.regexp, jarjar, jmock, qdox, easymock, hamcrest, iso-relax, relaxngdatatype, xsdlib, msv, xpp3, xpp2, gnu-crypto, apache-log4j, xmldb-api, ws-jaxme, dom4j, jdom, icu4j, jaxp, jaxp, xom, jaxen, rhino, batik, servletapi, jaf, gnuinetlib, gnumail, avalon-logkit, avalon-framework, commons-logging, commons-io and xmlgraphics-commons.

hm. that's 36. horrible ;)

the nice thing is that all these (except xmlgraphics-commons because classpath still lacks jpeg support) are compiled with the ecj/gcj toolchain, without any sun blob.

the other benefits are:

  • a native fop binary:
    $ file /usr/bin/fop
    /usr/bin/fop: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.6.0, dynamically linked (uses shared libs), stripped
  • now we got rid of fop-devel, since this version can both convert ttf fonts to xml ones (needed if you want to embed custom fonts into pdf) and convert fo documents to pdf ones.

yay!


message-ids

Estimated read time: 1 minute

ok, this post will be a bit generic, but it seems this is still totally new to some people. so, the Message-ID header in an email is ideally unique and you can easily use it to refer to an email in another discussion.

in this post i want to deal with 3 issues:

first, how to display it in your mail client. this depends on your mua; in mutt, you need to add

unignore message-id

to your muttrc.

second, if you want to search for a message-id in a folder, that's your mua's task as well. in mutt, you can do it for example with

~i 200804281829.11866.henrikau@orakel.ntnu.no

the third trick isn't mua-specific. if you want to link to the message, and the list is indexed by gmane, then you can just type

http://mid.gmane.org/200804281829.11866.henrikau@orakel.ntnu.no

and it'll redirect to

http://article.gmane.org/gmane.comp.version-control.git/80566
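outside the mua, the header is easy to pull out programmatically too; a small sketch with python's stdlib email module (the message below is made up):

```python
import email

# A made-up mail; only the Message-ID header matters here.
raw = (b"From: sender@example.com\r\n"
       b"Message-ID: <200804281829.11866.henrikau@orakel.ntnu.no>\r\n"
       b"\r\n"
       b"body\r\n")

msg = email.message_from_bytes(raw)
# Strip the surrounding angle brackets to build the gmane link.
mid = msg["Message-ID"].strip(" <>")
print("http://mid.gmane.org/" + mid)
```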

ok, that's all for today :)


source highlight in asciidoc

Estimated read time: 1 minute

i recently packaged source-highlight, and asciidoc can use it nicely. an example page (example code using pacman-g2 bindings in 4 different languages) is available here. yay! :)
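for reference, a highlighted listing in asciidoc looks something like this (a minimal sketch; the exact attribute names depend on your asciidoc version and filter setup):

```asciidoc
[source,python]
----
print("hello")
----
```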

ungreedy regex in javascript

Estimated read time: 1 minute

a few days ago i wanted to use ungreedy regexes in javascript. first, let's see what an ungreedy regex is. look at the following example:

>>> "<p>foo</p><p>bar</p>".replace(/<p>f.*<\/p>/, '')
""

this is greedy. you want to get something like:

"<p>bar</p>"

right?

that would be ungreedy. in some other languages, there is a flag for this (php has 'U'), but in javascript, you need another trick:

>>> "<p>foo</p><p>bar</p>".replace(/<p>f.*?<\/p>/, '')
"<p>bar</p>"

and yes, that's what we wanted. it also works with .+?, and so on.

ah, and as a side note, it seems '.' does not match newlines, so you'll have to work around that like:

>>> "<p>foo\nbar</p><p>baz</p>".replace(/<p>f[\s\S]*?<\/p>/, '')
"<p>baz</p>"


being accepted in gsoc 2k8

Estimated read time: 1 minute

ok, this is now official: i'll get paid for working on the C rewrite of git-merge during the summer ;)

just for fun, i collected some other projects with Hungarian students: samba, e17, freebsd, genmapp, xorg, drupal.


Kocka howto

Estimated read time: 4 minutes

How was the cube made?

First of all, you need 8 wooden cubes. The size is up to you; I used 25 mm ones. These will be covered with pictures, inside and out. More precisely, you need 3 large pictures and 6 small ones. A large picture is 4 x 2 cube faces in size, a small one is 2 x 2. This is practical because the pictures for one cube fit exactly on an A4 sheet, which can then be printed onto a transparency. The method is not perfect, since the ink wears off this transparency easily, so you should also cover the transparency from above with the wide sticky tape (the big, 5 cm wide kind) available in stationery shops. Copy General uses a 500 DPI printer, so a picture for one cube face is best made 1100 px wide.

The A4 image can be put together by hand or with a script -- more on the latter below. If you do it by hand, this is how to arrange it:

1

If you use the script, an enkockam directory will contain subdirectories named 4 and 8, holding the 6 size-4 and 3 size-8 pictures, respectively. Then run kocka.py:

$ ./kocka.py enkockam

This produces the enkockam.jpg file.

The next question is how to cut up these large and small pictures. With too many or too few cuts, either the cube cannot be rotated all the way around, or it falls apart. I color the pictures so they do not get mixed up.

Let's look at the larger pictures first: dark blue, light yellow and dark green:

2

Cutting the small pictures: red, orange, yellow, light green, light blue, cream:

3

The cube rotates all the way around in 6 steps. Gluing: arrange the 8 small cubes into squares and stack them on top of each other. Put the orange picture on top, cut as shown in the drawing. Peel the outer layer off the back of the transparency and stick it on. The cream side faces you (also as in the drawing, so the cut is horizontal), red is on the left, yellow on the right (the cut should be vertical, rotated 90 degrees compared to the drawing), blue at the back (cut horizontal), and light green on the bottom (rotated 180 degrees compared to the drawing, with the short cut on the right just like the orange on top).

Once you have got this far, the bottom-right and top-right columns of the cube (seen from above) can be folded out (seen from above, you get a 4-cube-tall orange strip). On the right, dark blue occupies the whole large surface. Only dark green and light yellow are left. Rotate the big cube to the left along its long axis, so the big blue surface faces up. Now open the big cube by rotating its left half to the left and its right half to the right by 90 degrees, lengthwise. You still have a cuboid in front of you, but the blue parts are now vertical on the left and right sides, and an uncovered surface is revealed. Light yellow goes here.

Only dark green is left. The goal is to get it on the outside. Go back two steps, to when the cube was still in one piece: close it up, blue is on top again, rotate 90 degrees to the right, the orange strip is on top, fold top and bottom together so it is a cube again. Now fold the top half of the cube over to the right so that we get a cuboid again. We will see light yellow again, with the split-up red on the left and right edges. Rotate the cube 90 degrees on the table top. Open it the same way as when we went from blue to yellow, except this time the yellow turns into a bare surface, and dark green goes onto these 8 cube faces.

Update: following this description in 2025, it seems the order of the last two has to be reversed, so dark green is second-to-last and light yellow is last.


incremental bzr -> git conversion

Estimated read time: 1 minute

i recently had problems with bzr -> git conversion using tailor, and now Lele has pulled my patches, so here is a mini-howto about how i did the conversion.

i did all this in a ~/scm/tailor/bitlbee dir (to convert the bitlbee bzr repo), but of course you can do it somewhere else, too.

create the dir and place the tailor config there. mine looks like:

$ cat bitlbee.conf
[DEFAULT]
verbose = True

[bitlbee]
target = git:target
start-revision = INITIAL
root-directory = /home/vmiklos/scm/tailor/bitlbee
state-file = bitlbee.state
source = bzr:source
subdir = bitlbee.git

[bzr:source]
repository = /home/vmiklos/scm/tailor/bitlbee/bitlbee.bzr

[git:target]
repository = /home/vmiklos/scm/tailor/bitlbee/bitlbee.git

and here is the update script:

$ cat update.sh
#!/bin/sh -e
cd $(dirname $0)
cd bitlbee.bzr
bzr pull
cd ..
tailor -c bitlbee.conf

to update the import daily i added the following to my crontab:

40 4 * * * ~/scm/tailor/bitlbee/update.sh >/dev/null 2>&1

and we're ready, you'll have a daily updated git import.

one minor note: the bitlbee.git dir is a non-bare repo and it's also a bzr repo, which is not a problem (you can clone it and gitweb handles it), but if you plan to switch to git later, you probably want to clone it once to get rid of that junk :)


ten goals we reached in 2007

Estimated read time: 2 minutes

..continuing last year's article. so another year has passed and it's time to look back and see what we did during 2007. i probably missed a lot of stuff, but here is my list:

1) the ability to go back in the installer to a previous point if you missed something. do you remember the days when one had to reboot to do so? :)

2) compiz improvements. this has now settled down in current and it's pretty sane. we cleaned up the old compiz and beryl, we have a single compiz-fusion, it has nice step-by-step documentation and it works fine for both kde and gnome.

3) asciidoc. i think we greatly improved our documentation when we switched from latex to asciidoc. a user manual of 98 pages in a nice pdf format is cute, isn't it? :)

4) newsletters. Alex started to issue newsletters and recently phayz helped us out, so it's alive again. i think it's something great.

5) yugo. 'factory' was our previous i686 build server; it was a very old machine with a 300mhz cpu and so on. it was time to replace it and now yugo does the job.

6) fwlive. this was an old project but only test versions were available, based on old frugalware versions. now there is a live version of every released frugalware version, thanks to janny, boobaa and ironiq. great!

7) gnetconfig. the first graphical config tool from priyank. i'm really bad at any graphical programming, so i'm glad to see finally we started to work on guis.

8) gfpm. something users always wanted and now it's here. a true graphical package manager, which is not just a wrapper but properly uses libpacman. awesome.

9) fun. this is our update manager which can sit in the system tray (or whatever i should call kicker so as not to be kde-specific ;) ) and notifies you when there is something to update. i'm sure this is more comfortable than watching the -security mailing list for updates or doing a -Syu daily :)

10) syncpkgd2. if you remember, the old method was that there were only clients and they tried to figure out what to build, then they built and uploaded the packages. this was very suboptimal: it allowed only one buildserver per arch and it was slow. okay, being slow is the smaller problem, but every buildserver was a single point of failure. nowadays we have two i686 buildservers (thanks to boobaa) and theoretically it's possible to have two x86_64 buildservers, too. so even if one i686 buildserver is down, i can be at the beach, sipping a mojito :)

© Miklos Vajna. Built using Pelican. Theme by Giulio Fidente on github.