Weblog: TamTam in RazmjenaVjestina

CroPuppianTrazi (what this CroPuppian is looking for)

1) Email & GnuPG - solved!

(BuD clarified how to use gpg. However, Enigmail only works under Thunderbird >= 2.0 or SeaMonkey Mail >= 1.1. It remains to try the same under Pine and Mutt.)
After a refresher of the material run by MarcellMars, I decided to install the fastest e-mail client I know of, one that has long supported GnuPG and IMAP. Claws Mail 2.9.0 works without a hitch thanks to Hiroyuki Yamamoto from Japan <hiro-y@kcn.ne.jp>, so I immediately set it as the default on my Puppy 2.17.
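As a minimal sketch of the GnuPG workflow these clients wrap (the recipient and file names are hypothetical):

    # generate a key pair, then encrypt and sign a message for a recipient
    gpg --gen-key
    gpg --encrypt --sign --recipient bud@example.com message.txt
    # the recipient decrypts and verifies in one step
    gpg --decrypt message.txt.gpg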

2) TiddlyWiki Blog Tool

3) Practical experience with water-cooled computers. Other technical solutions for quiet operation suitable for home conditions are also welcome.

I settled on air cooling instead and installed a Big Tower (CHIEFTEC GX-01B-OP). It supposedly cools well, but oh how it roars... I'd give anything to silence it.

4) Linux boot from a PCMCIA CF (Compact Flash) card. Gentoo has reportedly solved this, but I don't know how.

5) PXE network boot from a machine running XP and/or Ubuntu. I found a solution in principle; now it just needs to be carried out technically. I have the "bulky kernel", and for the rest... we'll manage... - After setting up the distribution source on bljak (see the distribution source demonstration), Puppy should be added to bljak's PXE boot. I obtained the bulky initrd.gz, which now needs to be edited/fitted, and that is why I need help.
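A sketch of the usual initrd.gz surgery, assuming a cpio-format initramfs (some Puppy releases instead ship a gzipped filesystem image, which you would loop-mount after gunzip; paths are hypothetical):

    # unpack
    mkdir /tmp/initrd && cd /tmp/initrd
    zcat /path/to/initrd.gz | cpio -i -d -m
    # ...edit files as needed, then repack...
    find . | cpio -o -H newc | gzip -9 > /tmp/initrd-new.gz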

6) Using Gentoo binary packages on Puppy Linux. Of course, the binary package would somehow have to be extracted before installation. (On other Linuxes this is done before make install, which is when the Puppy PET format is created.)

In theory that's fine, but in practice it makes little sense. Perhaps ready-made Sabayon packages should be used instead? One possibility is also a Gentoo built on a Puppy core. The idea is very interesting and can be found at http://www.simplux.org.

7) Help configuring a heterogeneous home network of 3 computers.

1. PC: Intel Core 2 Quad Q6600 2.4 GHz; 4 GB RAM; GeForce 8800 GTS; 320 GB HDD.
The base system is an XP game console. Installation of 64-bit Ubuntu in dual boot is in progress.

2. PC: Intel Pentium 4 1.8 GHz; 1 GB RAM; GeForce 6200; HDD 1x320 GB & 2x80 GB;
dual boot Ubuntu Dapper Drake & XP. An HP 970 Cxi printer is attached locally to the parallel port.

3. IBM R32 laptop running Puppy Linux 3.1

For now the network is configured peer-to-peer, with an ADSL connection through a wired Siemens C-010-i router and static IP addresses.
The first step is to crystallize the basic concept of a server-type network.
If possible, I would prefer to install several Virtual Private Linux Servers on the strong machine and share the strong machine's resources as a terminal server...

I know far too little about all of this... which is exactly why I need the help... of fellow exchangers...

8) A configuration for the Dynebolic Afro Linux CD, specifically a version that works on the IBM R32 laptop.

DrGspot, a declared user, came forward and offered a demo CD. It looks promising, with great development potential... The driver for the keyboard joystick needs to be added right away, and RAM usage improved, because applications flicker...

9) I am looking for devotees of small distributions for a general exchange of experience. Apart from DrGspot (Dynebolic) and KruNo (DSL - Krunek), I unfortunately haven't met anyone else yet.
----

original Oct 8 4:38am

PuppyMatricaVjestina

CroPuppianNudi (what this CroPuppian offers; a link of "none" just means the written guidelines haven't been written yet)

No. | Skill | Link | Remark
0 | still a Windows user | http://www.goosee.com/best/index.html | still, in my own way
1 | installing Puppy | boot the Puppy CD | to start, just press Enter
1.1 | installing Puppy on HD | http://www.puppylinux.com/hard-puppy.htm | check at the GRUB prompt: >find /vmlinuz
1.2 | USB boot without BIOS support | http://www.murga-linux.com/puppy/viewtopic.php?t=16950 | installing GRUB on a CD
2 | starting Puppy | same as installation | blabla
3 | installing Liberation fonts | http://www.linux.hr/modules/newbb/viewtopic.php?topic_id=750&forum=3&post_id=8542#forumpost8542 | compatibility with Windows core fonts
3.1 | keyboard shortcuts | Menu - Desktop - JWM Configuration | tune it up
4 | IRC | http://f0rked.com/articles/irssi | xchat as the classic, but irssi wins anyway
5 | p2p networks, torrent download | none | nothing easier with pupctorrent
5.1 | standard download: wget | http://www.editcorp.com/personal/lars_appel/wget/wget_7.html#SEC31 | yes, yes, wget can do everything and more
6 | printer configuration | none | local and networked
7 | installing standard alternative window managers (Fluxbox, IceWM, XFCE) | http://www.tuxfiles.org/linuxhelp/changeman.html | additional packages need to be installed
7.1 | installing exotic WMs (wmii, dwm, awesome) | http://www.xwinman.org/basics.php | ????
7.2 | installing geeky WMs (ion, stumpwm, ...) | none | ???
7.3 | configuring rvwm | none | ??? the super-light WM of RazmjenaVjestina
7.4 | running several window managers on the same host | svakodnevnedovitljivostionelineri | Xnest in action
7.4 | cool desktop a la RV | none | a transparent, docked, borderless terminal plus conky, but all with JWM, after an idea by dboto
8 | audio format conversion | none | nothing easier with SoxGui audio conversion
9 | video format conversion | none | ????
10 | burning CDs and DVDs | none | easier than NERO
11 | partitioning and formatting | none | nothing easier with the Pdisk partition manager
11.1 | I accidentally deleted a file from my disk/USB | http://www.cgsecurity.org/wiki/TestDisk_Download | it really works, filippo convinced me
12 | cracking a "forgotten" root password | none | not practiced, since Puppy protects its content in an encrypted file
13 | GnuPG | none | paired with Enigmail, BuD gets it; with the Firefox extension FireGPG it can also be used with GMAIL - MarcellMars
14 | exchanging data with another PC | none | MS Windows or any other OS
15 | Tiddly Wiki blog | http://www.picigin.net/logcells/; http://www.anshul.info/blogwiki.html | only MarcellMars knows how to get started
15a | GTD tool a la Tiddly Wiki | http://monkeygtd.tiddlyspot.com/ | are you up for personal productivity??
15b | wiki on a USB stick | http://sourceforge.net/projects/stickwiki | PhilipB knows more about all of it
16 | WordPress blog tool | none | installed locally on a Puppy XAMPP server
17 | data backup | none | whole partitions, but MS Windows system files work too
18 | installing Freemind software | none | there is an add-on installation package that requires Sun Java JRE (DrGspot); HorzA offers its use in web development
19 | integrating the Croatian localization | http://lokalizacija.linux.hr | per DrGspot, get acquainted with translating at http://www.rilinux.hr/index.php?pid=dist
20 | reading MS chm files | not needed | open Menu-Document-ChmSee
21 | graphics editing | none | install GIMP using the PET installer
21.1 | animated GIF | http://kiberkomunist.wordpress.com/2007/09/19/intervju-u-glasu-istre-povodom-yaxwe-a/ | ???
21.2 | 2D animation | none | ???
22 | vector graphics editing | none | InkscapeLite is on the standard CD
23 | IrfanView under Linux | http://www.murga-linux.com/puppy/viewtopic.php?t=17359 | yes, we need Wine; the best viewer works flawlessly, as AcO proves
24 | laptop projection | none | tried with a Sony projector, simply works out of the box
93 | bluetooth, e.g. mobile phone and laptop | - | set the password on the first device; for the rest, ask KruNo
94 | FLAC audio codec | - | install the VLC player with the FLAC codec
95 | Puppy in a virtual machine: QEMU Puppy | http://www.erikveen.dds.nl/qemupuppy/index.html; http://www.erikveen.dds.nl/qemupuppy/images/qemupuppytje.gif | copy to USB and run under MS Windows/*nix on any PC
96 | TightVNC | http://www.tightvnc.com/ | nothing easier: Menu-Network-TightVNC
97 | using a powerful LAN machine's resources from a laptop | none | openssh server and BuD
98 | mutt, the geek email client | http://mutt.blackfish.org.uk/overview/ | configuration: BuD
99 | ssh tunneling | http://www.suso.org/docs/shell/ssh.sdf; http://www.buzzsurf.com/surfatwork/; http://bud.bljak.org/?p=26 | ...but LesH knows, and BuD exchanges
99a | Java ssh client | http://www.appgate.com/products/80_MindTerm/ | oops... Java... and Vedran
100 | tor | none | ?????
101 | MIDI production applications | http://www.murga-linux.com/puppy/viewtopic.php?p=127919&search_id=1919590073#127919 | low-latency kernel patch, Ingo Molnar
101a | replace the keyboard and mouse with a gamepad | http://git.savannah.nongnu.org/gitweb/?p=topot.git | MarcellMars
101b | record 4-channel music | none | MarcellMars and JACK in action
102 | kismet & aircrack | http://dotpups.de/dotpups/Wifi/wireless-utilities/ | ...enough for wireless??? ask BuD
103 | dumping a map from the web, or rather tshark in action | http://blog.rot13.org/2007/10/stitching_maps_together.html | http://www.razmjenavjestina.org/[DpavLin]
104 | use one mouse across several screens | http://synergy2.sourceforge.net/about.html | install synergy and the quicksynergy GUI; tried out by BuD
105 | create your own access point on a laptop with an Atheros wireless card | ethernet wifi bridge | just don't forget to install bridge-utils and run a dhcp server on the local network - believe it or not, Dobrica thought it up and BuD confirmed it :)
150 | Puppy with Gentoo as the underdog Linux | http://www.puppyos.net/blog/index.php?entry=entry070325-044449 | requires installing Sabayon and help from AkA - in the end installing the Gentoo 2007.0 Minimal CD, with 2 weeks of configuration, proved easier


original Dec 15 5:40am

DpavLin

Dobrica Pavlinušić

http://www.gravatar.com/avatar/1a717634ca43c3c2af9a9124d0a89c28?.jpg

Linux hacker since 1995.

Razmjenjivač (skill exchanger) since 2006.

Pages: http://www.rot13.org/~dpavlin/

Blog: http://blog.rot13.org/

Private wiki: http://wiki.rot13.org/rot13/

e-mail address: (yeah right, spam bots!)

Best of on this wiki

Posts from my blog

  • Stable ip on usb interface with changing mac

    Taming the PinePhone's Fickle USB Ethernet

    I use my PinePhone with postmarketOS as a mini-modem and SSH target. The goal is simple: plug the phone into my Debian laptop via USB, get a predictable IP address on it, and SSH in. This should be trivial. It was not.

    If I rebooted the phone while it was plugged in, the interface would die and not come back correctly.

    The first step is the kernel log.

    # dmesg -w
    

    Plugging in the phone revealed the first layer of the problem. It doesn't just connect once. It connects, disconnects, and then connects again about 15-20 seconds later.

    [Sun Jul 13 15:33:28 2025] usb 1-9: new high-speed USB device number 36 using xhci_hcd
    [Sun Jul 13 15:33:28 2025] usb 1-9: New USB device found, idVendor=18d1, idProduct=d001
    [Sun Jul 13 15:33:28 2025] cdc_ncm 1-9:1.0 enx2e50df0e14da: renamed from eth0
    ...
    [Sun Jul 13 15:33:45 2025] usb 1-9: USB disconnect, device number 36
    ...
    [Sun Jul 13 15:33:46 2025] usb 1-9: new high-speed USB device number 37 using xhci_hcd
    [Sun Jul 13 15:33:47 2025] usb 1-9: New USB device found, idVendor=18d1, idProduct=d001
    [Sun Jul 13 15:33:47 2025] cdc_ncm 1-9:1.0 enxde9ceb943c18: renamed from eth0
    

    Notice two things:

    1. The device connects as number 36, then disconnects, then reconnects as number 37. This is the phone's OS re-initializing the USB stack after boot.
    2. The kernel first registers a generic eth0, which is then immediately renamed to a "predictable" name like enx2e50df0e14da.

    And to make matters worse, the MAC address is different on each reconnection. Any static configuration in /etc/network/interfaces based on a MAC address is doomed to fail.

    The clear solution is to use a udev rule to act when the device appears. The stable identifier we have is the USB vendor and product ID, which lsusb confirms:

    $ lsusb | grep Google
    Bus 001 Device 037: ID 18d1:d001 Google Inc. Nexus 4 (fastboot)
    

    (It identifies as a Nexus 4 in this mode, which is fine.)

    My first attempt was a simple udev rule.

    # /etc/udev/rules.d/99-pmos-network.rules (ATTEMPT 1 - WRONG)
    ACTION=="add", SUBSYSTEM=="net", ATTRS{idVendor}=="18d1", RUN+="/usr/local/bin/pm-net-configure.sh %k"
    

    This failed because of a race condition. The add action fires the moment eth0 is created, but before it's renamed to enx.... My script would be told to configure eth0, which ceased to exist a millisecond later.

    The key to solving udev timing issues is to stop guessing and start observing.

    # udevadm monitor --environment --udev
    

    Running this while plugging in the phone produces a firehose of information. After the final reconnection, deep in the output, was the golden ticket:

    UDEV  [8215599.005027] move /devices/pci.../net/enxde9ceb943c18 (net)
    ACTION=move
    DEVPATH=/devices/pci.../net/enxde9ceb943c18
    SUBSYSTEM=net
    INTERFACE=enxde9ceb943c18
    IFINDEX=39
    ID_VENDOR_ID=18d1
    ID_MODEL_ID=d001
    ...
    

    The system generates a move event when the interface is renamed. This event is perfect. It only happens after the rename, and it contains both the final interface name (%k or $env{INTERFACE}) and the USB device IDs we need for matching.

    This leads to the final, correct, and surprisingly simple udev rule.

    The Final Solution

    1. The Udev Rule

    This single rule triggers at the exact moment the interface is renamed to its stable name.

    Create /etc/udev/rules.d/99-pmos-network.rules:

    # Trigger on the network interface "move" (rename) event for the PinePhone.
    # This avoids all race conditions with initial device naming.
    ACTION=="move", SUBSYSTEM=="net", ENV{ID_VENDOR_ID}=="18d1", ENV{ID_MODEL_ID}=="d001", RUN+="/usr/local/bin/pm-net-configure.sh %k"
    

    2. The Configuration Script

    This is the script the udev rule calls. The %k in the rule passes the correct interface name (e.g., enxde9ceb943c18) as the first argument.

    Create /usr/local/bin/pm-net-configure.sh:

    #!/bin/sh
    set -e
    
    DEV="$1"
    IP_ADDR="172.16.42.2/24"
    PEER_IP="172.16.42.1"
    LOG_FILE="/tmp/pmos_net_config.log"
    
    # Simple logging to know what's happening
    echo "---" >> "$LOG_FILE"
    echo "$(date): udev 'move' event on '$DEV'. Configuring." >> "$LOG_FILE"
    
    # Give the interface a second to settle, then bring it up and set the IP.
    sleep 1
    ip link set dev "$DEV" up
    ip addr add "$IP_ADDR" dev "$DEV"
    
    echo "$(date): Successfully configured $DEV" >> "$LOG_FILE"
    

    And make it executable:

    # chmod +x /usr/local/bin/pm-net-configure.sh
    

    3. Reload and Test

    Tell udev to pick up the new rule.

    # udevadm control --reload
    
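    A quick sanity check before rebooting the phone: udevadm can dry-run the rule against the current device, and since our rule matches the "move" action we ask for it explicitly (interface name taken from the dmesg output above):

    # udevadm test --action=move /sys/class/net/enxde9ceb943c18 2>&1 | grep pm-net-configure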

    Now, reboot the PinePhone. It will do its connect/disconnect dance. After the second connection, the move event will fire our rule.

    # ip a
    ...
    39: enxde9ceb943c18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc ...
        link/ether de:9c:eb:94:3c:18 brd ff:ff:ff:ff:ff:ff
        inet 172.16.42.2/24 scope global enxde9ceb943c18
           valid_lft forever preferred_lft forever
    ...
    
    # ping -c 2 172.16.42.1
    PING 172.16.42.1 (172.16.42.1) 56(84) bytes of data.
    64 bytes from 172.16.42.1: icmp_seq=1 ttl=64 time=0.885 ms
    64 bytes from 172.16.42.1: icmp_seq=2 ttl=64 time=0.672 ms
    

    It just works. Every time.

  • Protect Koha from web crawlers

    We live in troubling times, as web crawlers have become so prevalent in internet traffic that they can cause denial-of-service attacks on Koha instances.

    The simplest possible way to prevent this is the following Apache rule:

        <LocationMatch "^/cgi-bin/koha/(opac-search\.pl|opac-shelves\.pl|opac-export\.pl|opac-reserve\.pl)$">
            # Block requests without a referer header
            RewriteEngine On
            RewriteCond %{HTTP_REFERER} ^$
            RewriteRule .* - [F,L]
    
            # Optional: Return a 403 Forbidden status code
            ErrorDocument 403 "Access Forbidden: Direct access to this resource is not allowed."
        </LocationMatch>
    

    This helps to mitigate problems like this: apache_processes-week.png
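    A quick way to verify the rule from the shell (the hostname is hypothetical): a direct request without a Referer header should now get 403, while one carrying a Referer passes through:

    curl -s -o /dev/null -w '%{http_code}\n' 'https://koha.example.com/cgi-bin/koha/opac-search.pl?q=test'
    curl -s -o /dev/null -w '%{http_code}\n' -e 'https://koha.example.com/' 'https://koha.example.com/cgi-bin/koha/opac-search.pl?q=test'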

  • dspace shibboleth config and debug

    Configuring shibboleth is always somewhat confusing for me, so I decided to write this blog post to document how configuration for dspace is done and debugged.

    This information is scattered over documentation and dspace-tech mailing list so hopefully this will be useful to someone, at least me if I ever needed to do this again.

    First step is to install mod-shib for apache: apt install libapache2-mod-shib. Now there are two files which have to be modified with your information: /etc/shibboleth/shibboleth2.xml, which defines the configuration, and /etc/shibboleth/attribute-map.xml, which defines which information will be passed from shibboleth to the application.

    attribute-map.xml

    Here we have to define the headers which dspace expects, so it can get information from the upstream identity provider.

    diff --git a/shibboleth/attribute-map.xml b/shibboleth/attribute-map.xml
    index 1a4a3b0..a8680da 100644
    --- a/shibboleth/attribute-map.xml
    +++ b/shibboleth/attribute-map.xml
    @@ -163,4 +163,34 @@
    </Attribute>
    -->

    + <!-- In addition to the attribute mapping, DSpace expects the following Shibboleth Headers to be set:
    + - SHIB-NETID
    + - SHIB-MAIL
    + - SHIB-GIVENNAME
    + - SHIB-SURNAME
    + These are set by mapping the respective IdP attribute (left hand side) to the header attribute (right hand side).
    + -->
    + <Attribute name="urn:oid:0.9.2342.19200300.100.1.1" id="SHIB-NETID"/>
    + <Attribute name="urn:mace:dir:attribute-def:uid" id="SHIB-NETID"/>
    + <Attribute name="hrEduPersonPersistentID" nameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic" id="SHIB-NETID"/>
    +
    + <Attribute name="urn:oid:0.9.2342.19200300.100.1.3" id="SHIB-MAIL"/>
    + <Attribute name="urn:mace:dir:attribute-def:mail" id="SHIB-MAIL"/>
    + <Attribute name="mail" nameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic" id="SHIB-MAIL"/>
    +
    + <Attribute name="urn:oid:2.5.4.42" id="SHIB-GIVENNAME"/>
    + <Attribute name="urn:mace:dir:attribute-def:givenName" id="SHIB-GIVENNAME"/>
    + <Attribute name="givenName" nameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic" id="SHIB-GIVENNAME"/>
    +
    + <Attribute name="urn:oid:2.5.4.4" id="SHIB-SURNAME"/>
    + <Attribute name="urn:mace:dir:attribute-def:sn" id="SHIB-SURNAME"/>
    + <Attribute name="sn" nameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic" id="SHIB-SURNAME"/>
    +
    </Attributes>

    shibboleth2.xml

    This is the main configuration file for shibd. First we need to add OutOfProcess and InProcess sections to include useful information in shibd.log.

    diff --git a/shibboleth/shibboleth2.xml b/shibboleth/shibboleth2.xml
    index ddfb98a..7b55987 100644
    --- a/shibboleth/shibboleth2.xml
    +++ b/shibboleth/shibboleth2.xml
    @@ -2,15 +2,46 @@
    xmlns:conf="urn:mace:shibboleth:3.0:native:sp:config"
    clockSkew="180">

    - <OutOfProcess tranLogFormat="%u|%s|%IDP|%i|%ac|%t|%attr|%n|%b|%E|%S|%SS|%L|%UA|%a" />
    + <!-- The OutOfProcess section contains properties affecting the shibd daemon. -->
    + <OutOfProcess logger="shibd.logger" tranLogFormat="%u|%s|%IDP|%i|%ac|%t|%attr|%n|%b|%E|%S|%SS|%L|%UA|%a">
    + <!--
    + <Extensions>
    + <Library path="odbc-store.so" fatal="true"/>
    + </Extensions>
    + -->
    + </OutOfProcess>

    +
    + <!--
    + The InProcess section contains settings affecting web server modules.
    + Required for IIS, but can be removed when using other web servers.
    + -->
    + <InProcess logger="native.logger">
    + <ISAPI normalizeRequest="true" safeHeaderNames="true">
    + <!--
    + Maps IIS Instance ID values to the host scheme/name/port. The name is
    + required so that the proper <Host> in the request map above is found without
    + having to cover every possible DNS/IP combination the user might enter.
    + -->
    + <Site id="1" name="sp.example.org"/>
    + <!--
    + When the port and scheme are omitted, the HTTP request's port and scheme are used.
    + If these are wrong because of virtualization, they can be explicitly set here to
    + ensure proper redirect generation.
    + -->
    + <!--
    + <Site id="42" name="virtual.example.org" scheme="https" port="443"/>
    + -->
    + </ISAPI>
    + </InProcess>
    +
    <!--
    By default, in-memory StorageService, ReplayCache, ArtifactMap, and SessionCache
    are used. See example-shibboleth2.xml for samples of explicitly configuring them.
    -->

    <!-- The ApplicationDefaults element is where most of Shibboleth's SAML bits are defined. -->
    - <ApplicationDefaults entityID="https://sp.example.org/shibboleth"
    + <ApplicationDefaults entityID="https://repository.clarin.hr/Shibboleth.sso/Metadata"
    REMOTE_USER="eppn subject-id pairwise-id persistent-id"
    cipherSuites="DEFAULT:!EXP:!LOW:!aNULL:!eNULL:!DES:!IDEA:!SEED:!RC4:!3DES:!kRSA:!SSLv2:!SSLv3:!TLSv1:!TLSv1.1">

    @@ -31,8 +62,8 @@
    entityID property and adjust discoveryURL to point to discovery service.
    You can also override entityID on /Login query string, or in RequestMap/htaccess.
    -->
    - <SSO entityID="https://idp.example.org/idp/shibboleth"
    - discoveryProtocol="SAMLDS" discoveryURL="https://ds.example.org/DS/WAYF">
    + <SSO
    + discoveryProtocol="SAMLDS" discoveryURL="https://discovery.clarin.eu">
    SAML2
    </SSO>

    @@ -68,6 +99,10 @@
    <!--
    <MetadataProvider type="XML" validate="true" path="partner-metadata.xml"/>
    -->
    + <MetadataProvider type="XML" url="https://login.aaiedu.hr/shib/saml2/idp/metadata.php" backingFilePath="aaieduhr-metadata.xml" maxRefreshDelay="3600" />

    <!-- Example of remotely supplied batch of signed metadata. -->
    <!--
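    After editing these files it's worth checking that shibd still parses the configuration before restarting it; the SP daemon has a config-test mode for exactly that:

    # shibd -t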

    certificates

    To make the upstream identity provider connect to us, we need a valid certificate, so we need /etc/shibboleth/sp-encrypt-cert.pem and /etc/shibboleth/sp-encrypt-key.pem.
    Since we are using Let's Encrypt for certificates, I'm using a shell script to move them over and change permissions so shibd will accept them.

    dpavlin@repository:/etc/shibboleth$ cat update-certs.sh
    #!/bin/sh -xe
    
    cp -v /etc/letsencrypt/live/repository.clarin.hr/privkey.pem sp-encrypt-key.pem
    cp -v /etc/letsencrypt/live/repository.clarin.hr/cert.pem sp-encrypt-cert.pem
    chown -v _shibd:_shibd sp-*.pem
    /etc/init.d/shibd restart
    

    dspace configuration

    We are using the upstream dspace docker image, which includes local.cfg from outside using a docker bind mount, but also re-defines the shibboleth headers, so we need to restore them to the default names defined earlier in /etc/shibboleth/attribute-map.xml.

    diff --git a/docker/local.cfg b/docker/local.cfg
    index 168ab0dd42..6f71be32cf 100644
    --- a/docker/local.cfg
    +++ b/docker/local.cfg
    @@ -14,3 +14,22 @@
    # test with: /dspace/bin/dspace dsprop -p rest.cors.allowed-origins

    handle.prefix = 20.500.14615
    +
    +shibboleth.discofeed.url = https://repository.clarin.hr/Shibboleth.sso/DiscoFeed
    +
    +plugin.sequence.org.dspace.authenticate.AuthenticationMethod = org.dspace.authenticate.PasswordAuthentication
    +plugin.sequence.org.dspace.authenticate.AuthenticationMethod = org.dspace.authenticate.ShibAuthentication
    +
    +# in sync with definitions from /etc/shibboleth/attribute-map.xml
    +authentication-shibboleth.netid-header = SHIB-NETID
    +authentication-shibboleth.email-header = SHIB-MAIL
    +authentication-shibboleth.firstname-header = SHIB-GIVENNAME
    +authentication-shibboleth.lastname-header = SHIB-SURNAME
    +# Should we allow new users to be registered automatically?
    +authentication-shibboleth.autoregister = true

    example of working login

    ==> /var/log/shibboleth/shibd.log <==
    2024-11-18 17:18:40 INFO XMLTooling.StorageService : purged 2 expired record(s) from storage

    ==> /var/log/shibboleth/transaction.log <==
    2024-11-18 17:27:42|Shibboleth-TRANSACTION.AuthnRequest|||https://login.aaiedu.hr/shib/saml2/idp/metadata.php||||||urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST||||||

    ==> /var/log/shibboleth/shibd.log <==
    2024-11-18 17:27:42 INFO Shibboleth.SessionCache [1] [default]: new session created: ID (_e37d7f0b8ea3ff4d718b3e2c68d81e45) IdP (https://login.aaiedu.hr/shib/saml2/idp/metadata.php) Protocol(urn:oasis:names:tc:SAML:2.0:protocol) Address (141.138.31.16)

    ==> /var/log/shibboleth/transaction.log <==
    2024-11-18 17:27:42|Shibboleth-TRANSACTION.Login|https://login.aaiedu.hr/shib/saml2/idp/metadata.php!https://repository.clarin.hr/Shibboleth.sso/Metadata!303a3b0f72c5e29bcbdf35cab3826e62|_e37d7f0b8ea3ff4d718b3e2c68d81e45|https://login.aaiedu.hr/shib/saml2/idp/metadata.php|_3b8a8dc87db05b5e6abf8aaf9d5c67e6ebc62a2eed|urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport|2024-11-18T17:01:57|SHIB-GIVENNAME(1),SHIB-MAIL(2),SHIB-NETID(1),SHIB-SURNAME(1),persistent-id(1)|303a3b0f72c5e29bcbdf35cab3826e62|urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST||urn:oasis:names:tc:SAML:2.0:status:Success|||Mozilla/5.0 (X11; Linux x86_64; rv:134.0) Gecko/20100101 Firefox/134.0|141.138.31.16

    broken utf-8 in shibboleth user first name or surname

    You might think that now everything works, but dspace will try to re-encode utf-8 characters received from upstream shibboleth using iso-8859-1, breaking accented characters in the process. There is a configuration option for this, but by default it's false.

    root@4880a3097115:/usr/local/tomcat/bin# /dspace/bin/dspace dsprop -p authentication-shibboleth.reconvert.attributes
    false

    So we need to modify dspace-angular/docker/local.cfg and turn it on:

    # incoming utf-8 from shibboleth AAI
    authentication-shibboleth.reconvert.attributes=true
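    After restarting the container, the same dsprop query from above should now report the new value:

    /dspace/bin/dspace dsprop -p authentication-shibboleth.reconvert.attributes
    # should now print: true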

  • WordPress comment spam - what to do about it and how

    Let's assume that you inherited a WordPress installation (or three) with tens of instances (or hundreds, in this case) which are generating spam through comments. I will try to describe the problem here and suggest a solution which doesn't require clicking around in WordPress, but instead uses wp cli, which is faster and easier, especially if you don't have an administrative account on all those WordPress instances. Interested? Read on.

    WordPress comment spam

    If you try googling around how to prevent WordPress comment spam, you will soon arrive at two solutions:

    • changing default_comment_status to closed which will apply to all new posts
    • changing comment_status on all existing posts to close
    However, this is not a full solution, since media in WordPress can also have comments enabled, and those two steps above won't stop spam on media. There are plugins to disable media comments, but since we have many WordPress instances I wanted a solution which doesn't require modifying each of them. And there is a simple one, using the close_comments_for_old_posts option, which will basically do the same thing after close_comments_days_old days (14 by default).

    So, in summary, all of this can easily be done using the following wp cli commands:

    wp post list --post-status=publish --post_type=post --comment_status=open --format=ids \
            | xargs -d ' ' -I % wp post update % --comment_status=closed
    
    wp option update default_comment_status closed
    
    wp option update close_comments_for_old_posts 1
    
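    To confirm the result, a quick check using the same wp cli (the post count should come back as 0):

    wp option get default_comment_status
    wp option get close_comments_for_old_posts
    wp post list --post-status=publish --post_type=post --comment_status=open --format=count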

    If wp cli doesn't work for you (for example, if your WordPress instance is so old that wp cli returns errors for some plugins instead of working), you can achieve the same thing using SQL (this assumes that wp db query is working; if it doesn't, you can always connect using mysql with the login and password from wp-config.php):

    cat << __SQL__ | wp db query
    update wp_posts set comment_status='closed' where comment_status != 'closed' ;
    update wp_options set option_value = 'closed' where option_name = 'default_comment_status' and option_value != 'closed' ;
    update wp_options set option_value = 1 where option_name = 'close_comments_for_old_posts' and option_value != 1
    __SQL__
    
    This is also the faster option, because all SQL queries are invoked in a single wp db query call (which matters, since each php instance startup can take some time).

    Cleaning up held or spam comments

    After you have disabled new comment spam, you will be left with some number of comments marked as spam, or left in held status if your WordPress admins didn't do anything about them. To clean up the database, you can use the following to delete spam or held comments:

    wp comment delete $(wp comment list --status=spam --format=ids) --force
    
    wp comment delete $(wp comment list --status=hold --format=ids) --force
    

    Disabling contact form spam

    Not all spam is the result of comments; some of it might come through a contact form. To stop those, you can deactivate the contact form plugin, which will leave ugly markup on pages that embed the form, but the spam will stop.

    # see which contact plugins are active
    wp plugin list | grep contact
    contact-form-7  active  none    5.7.5.1
    contact-form-7-multilingual     active  none    1.2.1
    
    # disable them
    wp plugin deactivate contact-form-7
    

  • freeradius testing and logging

    If you are put in front of a working RADIUS server which you want to upgrade, and this is your first encounter with RADIUS, the following notes might be useful to get you started.

    The goal is to upgrade the system and test whether everything still works after the upgrade.

    radtest

    The first way to test radius is radtest, which comes with freeradius and lets you verify whether a login/password combination results in successful auth.

    You have to ensure that you have a 127.0.0.1 client defined, in our case in the /etc/freeradius/3.0/clients-local.conf file:

    client 127.0.0.1 {
        ipv4addr    = 127.0.0.1
        secret      = testing123
        shortname   = test-localhost
    }
    
    Restart freeradius and test
    # systemctl restart freeradius
    
    
    # radtest username@example.com PASSword 127.0.0.1 0 testing123
    
    Sent Access-Request Id 182 from 0.0.0.0:45618 to 127.0.0.1:1812 length 86
        User-Name = "username@example.com"
        User-Password = "PASSword"
        NAS-IP-Address = 193.198.212.8
        NAS-Port = 0
        Message-Authenticator = 0x00
        Cleartext-Password = "PASSword"
    Received Access-Accept Id 182 from 127.0.0.1:1812 to 127.0.0.1:45618 length 115
        Connect-Info = "NONE"
        Configuration-Token = "djelatnik"
        Callback-Number = "username@example.com"
        Chargeable-User-Identity = 0x38343431636162353262323566356663643035613036373765343630333837383135653766376434
        User-Name = "username@example.com"
    
    # tail /var/log/freeradius/radius.log
    Tue Dec 27 19:41:15 2022 : Info: rlm_ldap (ldap-aai): Opening additional connection (11), 1 of 31 pending slots used
    Tue Dec 27 19:41:15 2022 : Auth: (9) Login OK: [user@example.com] (from client test-localhost port 0)
    
    This will also test the connection to LDAP, in this case.

    radsniff -x

    To get a dump of RADIUS traffic on a production server to stdout, use radsniff -x.

    This is useful, but won't get you the encrypted parts of EAP.

    freeradius logging

    To see the full protocol decode from freeradius, you can run it with the -X flag in a terminal, which will run it in the foreground with debug output.

    # freeradius -X
    
    If you have the ability to run an isolated freeradius for testing, this is the easiest way to see the whole configuration parsed (and the warnings!) and decoded EAP traffic.

    generating more verbose log file

    Adding -x to /etc/default/freeradius or to the radius command line will generate a debug log in the log file. Be mindful of disk space usage for the additional logging! To get enough debugging in the logs to see which EAP type is unsupported, like:

    dpavlin@deenes:~/radius-tools$ grep 'unsupported EAP type' /var/log/freeradius/radius.log
    (27) eap-aai: Peer NAK'd asking for unsupported EAP type PEAP (25), skipping...
    (41) eap-aai: Peer NAK'd asking for unsupported EAP type PEAP (25), skipping...
    (82) eap-aai: Peer NAK'd asking for unsupported EAP type PEAP (25), skipping...
    (129) eap-aai: Peer NAK'd asking for unsupported EAP type PEAP (25), skipping...
    (142) eap-aai: Peer NAK'd asking for unsupported EAP type PEAP (25), skipping...
    
    you will need to use -xx (two x's). Again, monitor disk usage carefully.

    EAP radius testing using eapol_test from wpa_supplicant

    To test EAP we need to build the eapol_test tool from wpa_supplicant.

    wget http://w1.fi/releases/wpa_supplicant-2.10.tar.gz
    
    tar xzf wpa_supplicant-2.10.tar.gz
    cd wpa_supplicant-2.10/wpa_supplicant
    $ cp defconfig .config
    $ vi .config
    
    CONFIG_EAPOL_TEST=y
    
    # install development libraries needed
    apt install libssl-dev libnl-3-dev libnl-genl-3-dev libnl-route-3-dev
    
    make eapol_test
    

    EAP/TTLS

    Now we need a configuration file for wpa_supplicant which tests EAP:

    ctrl_interface=/var/run/wpa_supplicant
    ap_scan=1
    
    network={
        ssid="eduroam"
        proto=WPA2
        key_mgmt=WPA-EAP
        pairwise=CCMP
        group=CCMP
        eap=TTLS
        anonymous_identity="anonymous@example.com"
        phase2="auth=PAP"
        identity="username@example.com"
        password="PASSword"
    }
    
    Now we can test against our radius server (with optional certificate test):
    # ./wpa_supplicant-2.10/wpa_supplicant/eapol_test -c ffzg.conf -s testing123
    
    and specifying your custom CA cert:
    # ./wpa_supplicant-2.10/wpa_supplicant/eapol_test -c ffzg.conf -s testing123 -o /etc/freeradius/3.0/certs/fRcerts/server-cert.pem
    
    This will generate a lot of output, but in the radius log you should see:
    Tue Dec 27 20:00:33 2022 : Auth: (9)   Login OK: [username@example.com] (from client test-localhost port 0 cli 02-00-00-00-00-01 via TLS tunnel)
    Tue Dec 27 20:00:33 2022 : Auth: (9) Login OK: [username@example.com] (from client test-localhost port 0 cli 02-00-00-00-00-01)
    

    GTC

    This seems to be a piece of tribal knowledge (passed to me by another sysadmin), but to make GTC work, change default_eap_type to gtc under ttls and add a gtc section:

            ttls {
                    # ... rest of config...
                    default_eap_type = gtc
                    # ... rest of config...
            }
    
            gtc {
                    challenge = "Password: "
                    auth_type = LDAP
            }
    
    and change the wpa_supplicant configuration to:
    CLONE dupli deenes:/home/dpavlin# cat eduroam-ttls-gtc.conf
    ctrl_interface=/var/run/wpa_supplicant
    ap_scan=1
    
    network={
            ssid="eduroam"
            proto=WPA2
            key_mgmt=WPA-EAP
            pairwise=CCMP
            group=CCMP
            eap=TTLS
            anonymous_identity="anonymous@example.com"
            phase2="autheap=GTC"
            identity="username@example.com"
            password="PASSword"
    }
    

    PEAP

    To make PEAP GTC work, I needed to add:

    diff --git a/freeradius/3.0/mods-available/eap-aai b/freeradius/3.0/mods-available/eap-aai
    index 245b7eb..6b7cefb 100644
    --- a/freeradius/3.0/mods-available/eap-aai
    +++ b/freeradius/3.0/mods-available/eap-aai
    @@ -73,5 +73,11 @@ eap eap-aai {
                    auth_type = LDAP
            }
    
    +       # XXX 2023-01-06 dpavlin - peap
    +       peap {
    +               tls = tls-common
    +               default_eap_type = gtc
    +               virtual_server = "default"
    +       }
    
     }
    
    which then can be tested with:
    network={
            ssid="wired"
            key_mgmt=IEEE8021X
            eap=PEAP
            anonymous_identity="anonymous@example.com"
            identity="username@example.com"
            password="PASSword"
    }
    

  • Local domains and caching bind server without Internet connection

    What do you do when you have bind as a caching resolver, forwarding to your DNS servers (which do recursive resolving and host the primary and secondary of your local domains), and the upstream link goes down?

    To my surprise, the caching server can't resolve your local domains, although both the primary and secondary for those domains are still available on your network and resolve them without problems (when queried directly).

    That's because the caching server tries to do recursive resolving using the root servers, which aren't available if your upstream link is down, so even your local domains aren't available to clients using the caching server.

    The solution is simple if you know what it is: add your local zones on the caching server with type forward:

    zone "ffzg.hr" {
        type forward;
        forwarders {
            193.198.212.8;
            193.198.213.8;
        };
    };
    
    zone "ffzg.unizg.hr" {
        type forward;
        forwarders {
            193.198.212.8;
            193.198.213.8;
        };
    };
    
    This will work, since queries for those zones are no longer recursive queries, so they don't need the root servers which aren't available without the upstream link.
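    To verify the behaviour, query the caching server directly for a forwarded zone while the upstream link is down; the query should still succeed:

    dig @127.0.0.1 ffzg.hr SOA +noall +answer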

  • dovecot maildir on compressed zfs pool

    This is a story about our mail server, which is coming close to its disk space capacity:

    root@mudrac:/home/prof/dpavlin# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/vda1        20G  7.7G   11G  42% /
    /dev/vdb        4.0T  3.9T   74G  99% /home
    /dev/vdc        591G  502G   89G  85% /home/stud
    

    You might say that it's easy to resize the disk and provide more storage, but unfortunately it's not so simple. We are using ganeti for our virtualization platform, and the current version of ganeti has a limit of 4T for a single drbd disk.

    This could be worked around by increasing the third (vdc) disk and moving some users to it, but that is not ideal. Another possibility is to use dovecot's zlib plugin to compress mails. However, since our Maildir doesn't have the required S=12345 size marker as part of the filename, this solution also wasn't applicable to us.

    Installing lvm would allow us to use more than one disk to provide additional storage, but since ganeti already uses lvm to provide virtual disks to instances, this also isn't ideal.

    OpenZFS comes to rescue

    Another solution is to use OpenZFS to present multiple disks as a single filesystem, and at the same time provide disk compression. Let's create a pool:

    zpool create -o ashift=9 mudrac /dev/vdb
    zfs create mudrac/mudrac
    zfs set compression=zstd-6 mudrac
    zfs set atime=off mudrac
    
    We are using an ashift of 9 instead of 12, since it uses 512-byte blocks on storage (which our SSD storage supports), and that saves quite a bit of space:
    root@t1:~# df | grep mudrac
    Filesystem      1K-blocks       Used Available Use% Mounted on
    mudrac/mudrac  3104245632 3062591616  41654016  99% /mudrac/mudrac # ashift=12
    m2/mudrac      3104303872 2917941376 186362496  94% /m2/mudrac     # ashift=9
    
    That's a saving of 137 GB just by choosing the smaller ashift.

    Most of our e-mail consists of messages kept on the server but rarely accessed. Because of that, I opted for zstd-6 (instead of the default zstd-3) to compress as much as possible. But, to be sure it was the right choice, I also tested zstd-12 and zstd-19; results are below:

    LEVEL    USED           COMP  H:S
    zstd-6   2987971933184  60%   11:2400
    zstd-12  2980591115776  59%   15:600
    zstd-19  2972514841600  59%   52:600
    Compression levels higher than 6 seem to need at least 6 cores to compress data, so zstd-6 seemed like the best performance/space tradeoff, especially taking into account the additional time needed for compression to finish.
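    The achieved ratio can also be checked at any time on the live dataset, since zfs tracks it:

    # compression settings and achieved ratio for the dataset from this post
    zfs get compression,compressratio mudrac/mudrac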

    bullseye kernel for zfs and systemd-nspawn

    To have zfs, we need a recent kernel. Instead of upgrading the whole server to bullseye at this point, I decided to boot bullseye with zfs and start the unmodified installation using systemd-nspawn. This is easy using the following command line:

    systemd-nspawn --directory /mudrac/mudrac/ --boot --machine mudrac --network-interface=eth1010 --hostname mudrac
    
    but it's not ideal for automatic start of the machine, so a better solution is to use machinectl and a systemd service for this. Converting this command line into an nspawn file is non-trivial, but after reading man systemd.nspawn, the configuration needed is:
    root@t1:~# cat /etc/systemd/nspawn/mudrac.nspawn
    [Exec]
    Boot=on
    #WorkingDirectory=/mudrac/mudrac
    # ln -s /mudrac/mudrac /var/lib/machines/
    # don't chown files
    PrivateUsers=false
    
    [Network]
    Interface=eth1010
    
    Please note that we are not using WorkingDirectory (which would copy files from /var/lib/machines/name) but instead just created a symlink to the zfs filesystem in /var/lib/machines/.

    To enable and start container on boot, we can use:

    systemctl enable systemd-nspawn@mudrac
    systemctl start systemd-nspawn@mudrac
    

    Keep network device linked to mac address

    The predictable network device names which bullseye uses should provide stable device names. This seems like a clean solution, but in testing I found out that adding additional disks changes the names of network devices. Previously, Debian used udev to map network interface names to device MACs via /etc/udev/rules.d/70-persistent-net.rules. Since this is no longer the case, the solution is to define a similar mapping using systemd network links, like this:

    root@t1:~# cat /etc/systemd/network/11-eth1010.link
    [Match]
    MACAddress=aa:00:00:39:90:0f
    
    [Link]
    Name=eth1010
    

    Increasing disk space

    When we do run out of disk space again, we can add a new disk to the zfs pool using:

    root@t2:~# zpool set autoexpand=on mudrac
    root@t2:~# zpool add mudrac /dev/vdc
    
    Thanks to autoexpand=on above, this will automatically make the new space available. However, if we grow an existing disk up to 4T, the new space isn't visible immediately, since zfs has a partition table on the disk, so we need to extend the device to use all available space using:
    root@t2:~# zpool online -e mudrac vdb
    

    zfs snapshots for backup

    Now that we have zfs under our mail server, it's logical to also use zfs snapshots to provide a nice, low-overhead incremental backup. It's as easy as:

    zfs snap mudrac/mudrac@$( date +%Y-%m-%d )
    
    in cron.daily, and then shipping the snapshots to the backup machine. I did look into existing zfs snapshot solutions, but they all seemed a little too complicated for my use case, so I wrote zfs-snap-to-dr.pl, which copies snapshots to the backup site.
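    The transfer itself boils down to an incremental zfs send piped over ssh; a minimal sketch of what zfs-snap-to-dr.pl automates (the host backup and the pool backup/mudrac are hypothetical):

    zfs send -i mudrac/mudrac@2021-06-01 mudrac/mudrac@2021-06-02 | \
        ssh backup zfs recv -F backup/mudrac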

    To keep just the two most recent snapshots on the mail server, a simple shell snippet is enough:

    zfs list -r -t snapshot -o name -H mudrac/mudrac > /dev/shm/zfs.all
    tail -2 /dev/shm/zfs.all > /dev/shm/zfs.tail-2
    grep -v -f /dev/shm/zfs.tail-2 /dev/shm/zfs.all | xargs -i zfs destroy {}
    
    Using the shell to create and expire snapshots, and a simpler script to just transfer them, seems to me a better and more flexible solution than implementing it all in a single perl script. In a sense, it's the unix way of small tools which do one thing well. The only feature zfs-snap-to-dr.pl has aside from snapshot transfer is the ability to keep just a configurable number of snapshots on the destination, which lets it keep disk usage in check (and it re-uses the already collected data about snapshots).

    This was an interesting journey. In the future, we will migrate the mail server to bullseye and remove systemd-nspawn (it feels like we are twisting its arm by using it like this). But it does work, and it is a simple solution which will come in handy.

  • Track your configuration using git

    I have a confession to make: etckeeper got me spoiled. I really like the ability to track changes in git and have them documented in the git log. However, this time I was working on an already installed machine which didn't have many files in /etc for etckeeper, but I still wanted the peace of mind of having configuration in git.

    This got me thinking: I could create a git repository in the root (/) of the file-system and then track any file with it. Since there are three servers, I could also use the other two nodes to back up the configuration by pushing to them.

    To make this work, first I need to init a git repository and create a branch with the same name as the short version of the hostname (this will allow us to push and pull with a unique branch name on each machine):

    # cd /
    # git init
    # git checkout -b $( hostname -s )
    
    With this done, all I have to do now is add and commit a file that I want to change (to preserve the original version), make the changes, and commit it after the change. To make the first step easier, I created a script which allows me to do git ac /path/to/file, adding the file to git and committing the original version in just one command (ac = add+commit).
    # cat /usr/local/bin/git-ac
    #!/bin/sh
    
    git add $*
    git commit -m $1 $*
    
    With this in place, I now have a nice log on one server. Now it's time to repeat it on each machine and use git remote add host1 host1:/.git to add the other hosts.

    Since I have some commits in the branch with the short hostname, it's also the right moment to issue git branch -d master to remove the master branch, which we don't use (and which would clutter our output later).

    We can fetch branches from the other servers manually, but since we already have that information in git remote, I wrote another quick script:

    # cat /usr/local/bin/git-f
    git remote | xargs -i git fetch {}
    
    With this I can issue just git f to fetch branches from all hosts. If I want to push changes to the other nodes, I can do git p, which is a similar script:
    # cat /usr/local/bin/git-p
    # disable push with git remote set-url --push pg-edu no_push
    
    git remote | xargs -i git push {} $( hostname -s )
    
    There is also a note on how to disable push to some remote (if you don't want to have full history there, but want to pull from it).

    With this in place, you get a nice log of changes in git, and since every host has a branch for every other host, you can even use git cherry-pick to apply the same change on multiple hosts. A last useful hint is git branch -va, which shows all branches together with the sha of each branch's last commit, which can be used to cherry-pick it. If you need older commits, you can always issue git log on the remote branch and pick the commit you need.
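    Put together, picking up a change made on another host looks something like this (the host name and sha are hypothetical):

    git f                    # fetch branches from all remotes
    git branch -va           # find the sha of host2's last commit
    git cherry-pick abc1234  # apply that commit locally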

    The last step is to add a job in cron.daily to commit any changes you forgot to commit during the day:

    # cat /etc/cron.daily/cron-commit
    #!/bin/sh
    
    cd /
    git commit -m $( date +%Y-%m-%dT%H%M%S ) -a
    
    With everything documented here, you have an easy-to-use git setup in which you can track changes to any file on your file-system. One additional note: if the file you want to track is on an nfs mount, you will need to add and commit it from outside of the nfs mount (specifying the full path to the file on nfs), because inside the nfs mount git will complain that there is no git repository there.

  • mysql database with latin1 charset and utf8 data

    I know that it's 2021, but we are still having problems with encodings in mysql (MariaDB in this case, but the problem is similar). This time, it's an application which I inherited that saves utf-8 data into a database which is declared as latin1.

    How can you check whether this is a problem with your database too?

    MariaDB [ompdb]> show create database ompdb ;
    +----------+------------------------------------------------------------------+
    | Database | Create Database                                                  |
    +----------+------------------------------------------------------------------+
    | ompdb    | CREATE DATABASE `ompdb` /*!40100 DEFAULT CHARACTER SET latin1 */ |
    +----------+------------------------------------------------------------------+
    
    An alternative way is to invoke mysqldump ompdb and examine the file generated. Why is this a problem? Let's try an SQL query on one of the tables:
    MariaDB [ompdb]> select * from submission_search_keyword_list where keyword_text like 'al%ir' ;
    +------------+--------------+
    | keyword_id | keyword_text |
    +------------+--------------+
    |       3657 | alzir        |
    |       1427 | alžir       |
    +------------+--------------+
    
    You can clearly see double-encoded utf8 which should read alžir. This is because our client connects using the utf8 charset and gets the utf8 data in binary form, so we see the double encoding. We can try connecting using latin1 instead:
    root@omp-clone:/home/dpavlin# mysql --default-character-set=latin1 ompdb
    MariaDB [ompdb]> select * from submission_search_keyword_list where keyword_text like 'al%ir' ;
    +------------+--------------+
    | keyword_id | keyword_text |
    +------------+--------------+
    |       3657 | alzir        |
    |       1427 | alžir       |
    +------------+--------------+
    
    Note that everything is still not well, because the grid after our utf8 data is not aligned correctly.
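    One way to confirm the double encoding is to look at the raw bytes (table name from the example above): ž in utf8 is the two bytes C5 BE, while the double-encoded version shows up as the four bytes C3 85 C2 BE.

    mysql ompdb -e "SELECT keyword_id, keyword_text, HEX(keyword_text) FROM submission_search_keyword_list WHERE keyword_id = 1427"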

    Googling around, you might find that a possible solution is to add --default-character-set=latin1 to mysqldump, edit all occurrences of latin1 into utf8 (utf8mb4 is the better choice), and reload the database, and the problem is solved, right?

    If we try to do that, we will get the following error:

    ERROR 1062 (23000) at line 1055 in file: '1.sql': Duplicate entry 'alžir' for key 'submission_search_keyword_text'
    
    Why is this? MySQL uses the collation setting to remove accents from data, so it treats alzir and alžir as the same string. Since we have both of them in our data, this is not good enough. Also, editing the database dump manually always makes me nervous, so we will use the following to get a dump without the encoding declaration (due to the --skip-opt option), but using latin1 for dumping the data:
    mysqldump ompdb --skip-set-charset --default-character-set=latin1 --skip-opt > /tmp/1.sql
    
    Next, we need to create a database with a collation which preserves everything (utf8mb4_bin), using:
    CREATE DATABASE omp2 CHARACTER SET = 'utf8mb4' COLLATE 'utf8mb4_bin' ;
    
    Finally, we should be able to reload the created dump without errors:
    mysql omp2 < /tmp/1.sql
    

    One additional benefit of using --skip-opt for mysqldump is that every insert is split onto an individual line. So if you want to get the correct collation and skip data which is invalid (which may be acceptable, depending on what the data is), you can use the same mysqldump file and add the -f flag when reloading, like mysql -f omp2 < /tmp/1.sql, which will report the rows that have errors but insert everything else into the database.

  • request tracker where ldap users have multiple mail addresses

    We have been using request tracker for years, but recently changed how many e-mail addresses we keep in the LDAP mail attribute. Up until now we stored just our local e-mail addresses there, but lately we have also added the external addresses our users have.

    This created a problem when users try to send e-mail from an external address to our rt. To test this, I have an account usertest which has dpavlin@example.com as the first mail in LDAP and dpavlin@m.example.com as the second one, and I'm sending e-mail from dpavlin@m.example.com like this:

    swaks --to sysadmin@rt.example.com --from dpavlin@m.example.com
    
    The result is the following log, which seems very verbose, but is also useful for understanding what is going wrong:


    [14188] [Fri Apr 16 07:57:26 2021] [debug]: Going to create user with address 'dpavlin@m.example.com' (/usr/local/share/request-tracker4/lib/RT/Interface/Email/Auth/MailFrom.pm:100)
    [14188] [Fri Apr 16 07:57:26 2021] [debug]: RT::Authen::ExternalAuth::CanonicalizeUserInfo called by RT::Authen::ExternalAuth /usr/local/share/request-tracker4/plugins/RT-Authen-ExternalAuth/lib/RT/Authen/ExternalAuth.pm 886 with: Comments: Autocreated on ticket submission, Disabled: , EmailAddress: dpavlin@m.example.com, Name: dpavlin@m.example.com, Password: , Privileged: , RealName: (/usr/local/share/request-tracker4/plugins/RT-Authen-ExternalAuth/lib/RT/Authen/ExternalAuth.pm:793)
    [14188] [Fri Apr 16 07:57:26 2021] [debug]: Attempting to get user info using this external service: FFZG_LDAP (/usr/local/share/request-tracker4/plugins/RT-Authen-ExternalAuth/lib/RT/Authen/ExternalAuth.pm:801) [14188] [Fri Apr 16 07:57:26 2021] [debug]: Attempting to use this canonicalization key: Name (/usr/local/share/request-tracker4/plugins/RT-Authen-ExternalAuth/lib/RT/Authen/ExternalAuth.pm:810)
    [14188] [Fri Apr 16 07:57:26 2021] [debug]: LDAP Search === Base: dc=ffzg,dc=hr == Filter: (&(objectClass=*)(uid=dpavlin@m.example.com)) == Attrs: co,uid,postalCode,physicalDeliveryOfficeName,uid,streetAddress,telephoneNumber,hrEduPersonUniqueID,cn,l,st,mail (/usr/local/share/request-tracker4/plugins/RT-Authen-ExternalAuth/lib/RT/Authen/ExternalAuth/LDAP.pm:358)
    [14188] [Fri Apr 16 07:57:26 2021] [debug]: Attempting to use this canonicalization key: EmailAddress (/usr/local/share/request-tracker4/plugins/RT-Authen-ExternalAuth/lib/RT/Authen/ExternalAuth.pm:810)
    [14188] [Fri Apr 16 07:57:26 2021] [debug]: LDAP Search === Base: dc=ffzg,dc=hr == Filter: (&(objectClass=*)(mail=dpavlin@m.example.com)) == Attrs: co,uid,postalCode,physicalDeliveryOfficeName,uid,streetAddress,telephoneNumber,hrEduPersonUniqueID,cn,l,st,mail (/usr/local/share/request-tracker4/plugins/RT-Authen-ExternalAuth/lib/RT/Authen/ExternalAuth/LDAP.pm:358)
    [14188] [Fri Apr 16 07:57:26 2021] [info]: RT::Authen::ExternalAuth::CanonicalizeUserInfo returning Address1: , City: Zagreb, Comments: Autocreated on ticket submission, Country: , Disabled: , EmailAddress: dpavlin@example.com, ExternalAuthId: usertest@example.com, Gecos: usertest, Name: usertest, Organization: , Password: , Privileged: , RealName: Testičić Probišić Đž, State: , WorkPhone: 014092209, Zip: (/usr/local/share/request-tracker4/plugins/RT-Authen-ExternalAuth/lib/RT/Authen/ExternalAuth.pm:869)
    [14188] [Fri Apr 16 07:57:26 2021] [crit]: User could not be created: User creation failed in mailgateway: Name in use (/usr/local/share/request-tracker4/lib/RT/Interface/Email.pm:243)
    [14188] [Fri Apr 16 07:57:26 2021] [warning]: Couldn't load user 'dpavlin@m.example.com'.giving up (/usr/local/share/request-tracker4/lib/RT/Interface/Email.pm:876)
    [14188] [Fri Apr 16 07:57:26 2021] [crit]: User could not be loaded: User 'dpavlin@m.example.com' could not be loaded in the mail gateway (/usr/local/share/request-tracker4/lib/RT/Interface/Email.pm:243)
    [14188] [Fri Apr 16 07:57:26 2021] [error]: Could not load a valid user: RT could not load a valid user, and RT's configuration does not allow for the creation of a new user for this email (dpavlin@m.example.com). You might need to grant 'Everyone' the right 'CreateTicket' for the queue SysAdmin. (/usr/local/share/request-tracker4/lib/RT/Interface/Email.pm:243)

    I'm aware that the lines are long and full of data, but they describe the problem quite well:

    1. RT tries to find a user with the e-mail address dpavlin@m.example.com (which doesn't exist, since RT uses just the first e-mail from LDAP, which is dpavlin@example.com)
    2. then it tries to create a new user with dpavlin@m.example.com, but runs another search over LDAP to make sure it won't create a duplicate user
    3. this search finds the user in LDAP due to the second email address and produces the wrong error message (that second lookup can be reproduced by hand, as shown below).
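    Reproducing that second lookup by hand makes the problem visible; the filter and base DN are the same ones from the log above:

    ldapsearch -x -b dc=ffzg,dc=hr '(mail=dpavlin@m.example.com)' uid mail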
    As the log file is very detailed and includes paths to the files used and line numbers, the solution was a simple additional check for this exact case:
    --- /usr/share/request-tracker4/plugins/RT-Authen-ExternalAuth/lib/RT/Authen/ExternalAuth/LDAP.pm.orig  2017-04-05 14:57:22.932000146 +0200
    +++ /usr/share/request-tracker4/plugins/RT-Authen-ExternalAuth/lib/RT/Authen/ExternalAuth/LDAP.pm       2021-04-16 15:49:34.800001819 +0200
    @@ -429,6 +429,12 @@
                                     $ldap_msg->code);
         }
    
    +    # FIXME -- dpavlin 2021-04-16 -- check if e-mail from ldap is same as incomming one
    +    if ( $key eq 'mail' && $value ne $params{EmailAddress}) {
    +       $RT::Logger->debug( "LDAP mail check return not found key = $key value = $value $params{EmailAddress}");
    +       $found = 0;
    +    }
    +
         undef $ldap;
         undef $ldap_msg;
    
    
    If the e-mail address we found in LDAP is not the same one we did the lookup on in CanonicalizeUserInfo, we just ignore it.

    I think this nicely shows the power of good logs and of open source software written in a scripting language, which you can modify in place to suit your (slightly broken) configuration.

  • openocd, raspberry pi and unknown stm32

    If you ever needed to connect to JTAG or SWD on an stm32 and tried to search for solutions on the Internet, you quickly realized that the amount of information is overwhelming. However, fear not. If you have a Raspberry Pi and a few wires, you are already half-way there.

    stm32-swd-blob.jpg

    For me, this whole adventure started when I got a non-working sensor which had an swd header and a blob over the chip. This was not my first swd experiment. Thanks to the great Hackaday Remoticon 2020 talk The Hackers Guide to Hardware Debugging by Matthew Alt, I had already tried to connect over swd from a Raspberry Pi to a bluepill (which is an stm32f103), so I had some experience with that. Now I also had an unknown device, so I could try what I could do with it.

For a start, you can notice that the device has UART TX and RX pins already soldered, so the first step was to connect a normal 3.3V serial adapter to those pins and see if we get some output. And I did. I could see that it's contacting the sensor chip and trying to initiate an NBIoT connection, but failing. So the next step was to solder the SWD pins and connect them to the Raspberry Pi. For that, I created the openocd configuration rpi4-zc-swd.cfg and uncommented the bottom of the configuration to get a first idea which chip is on the board (since it's covered with the blob):

# create a generic cortex_m target on the SWD DAP so we can probe the unknown chip
swd newdap chip cpu -enable
dap create chip.dap -chain-position chip.cpu
target create chip.cpu cortex_m -dap chip.dap
init
dap info
    
I did make some assumptions here, for example that the chip is a cortex_m, but since it has an swd header, there was a good chance it was.

However, since this sensor takes measurements at some configurable interval, just connecting using openocd didn't work: after power-up and a sensor check, the sensor went to sleep. While I could re-plug the sensor repeatedly, this is not needed, since there is also an rst pin (connected to pin 22 on the Raspberry Pi) which we can toggle from the shell using:

raspi-gpio set 22 op    # configure GPIO 22 as an output
raspi-gpio get 22
raspi-gpio set 22 dl    # drive it low to hold the chip in reset
raspi-gpio get 22
raspi-gpio set 22 dh    # drive it high again to release reset
raspi-gpio get 22
    
This woke up the sensor again, and I was able to connect to it using openocd and was greeted with the following output:

    root@rpi4:/home/pi/openocd-rpi2-stm32# openocd -f rpi4-zc-swd.cfg
    Open On-Chip Debugger 0.11.0+dev-00062-g6405d35f3-dirty (2021-03-27-16:05)
    Licensed under GNU GPL v2
    For bug reports, read
            http://openocd.org/doc/doxygen/bugs.html
    Info : BCM2835 GPIO JTAG/SWD bitbang driver
    Info : clock speed 100 kHz
    Info : SWD DPIDR 0x0bc11477
    Info : chip.cpu: hardware has 4 breakpoints, 2 watchpoints
    Info : starting gdb server for chip.cpu on 3333
    Info : Listening on port 3333 for gdb connections
    AP ID register 0x04770031
            Type is MEM-AP AHB3
    MEM-AP BASE 0xf0000003
            Valid ROM table present
                    Component base address 0xf0000000
                    Peripheral ID 0x00000a0447
                    Designer is 0x0a0, STMicroelectronics
                    Part is 0x447, Unrecognized
                    Component class is 0x1, ROM table
                    MEMTYPE system memory present on bus
    
So, this was indeed an STMicroelectronics chip, but of an unknown model. However, using Info : SWD DPIDR 0x0bc11477 and googling it, I figured out that it's probably an STM32L0xx, which again made sense.

So I started openocd -f rpi4-zc-swd.cfg -f target/stm32l0_dual_bank.cfg and used telnet 4444 to connect to it, and I was able to dump flash. However, I had to be quick, since the sensor powers itself off after 30 seconds or so. The solution was easy: I toggled the rst pin again and connected using gdb, which stopped the cpu and left the sensor powered on.
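
A minimal sketch of that gdb connection (assuming a cross gdb such as arm-none-eabi-gdb is installed; port 3333 comes from the openocd output above):

# attaching gdb halts the cpu, which keeps the sensor awake
arm-none-eabi-gdb -ex 'target extended-remote localhost:3333'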

However, all was not good, since a quick look at the 64K dump showed a partial AT command at its end, so the dump was not complete. So I opened the STM32L0x1 page and, since the mcu was an LQFP48 package with 128k of flash, my mcu was the STM32L081CB. So I restarted openocd -f rpi4-zc-swd.cfg -f target/stm32l0_dual_bank.cfg and got two flash banks:

    > flash banks
    #0 : stm32l0.flash (stm32lx) at 0x08000000, size 0x00010000, buswidth 0, chipwidth 0
    #1 : stm32l0.flash1 (stm32lx) at 0x08010000, size 0x00010000, buswidth 0, chipwidth 0
    
So I was able to dump them both and got the full firmware. This also proved very useful, because at one point I wrote flash from gdb instead of from the telnet 4444 connection and erased one of the sensors, which I was able to recover using the dump I had obtained.
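
Dumping both banks can be done from the telnet session using openocd's generic flash commands; a sketch, with bank numbers from the flash banks output above and file names of my own choosing:

> halt
> flash read_bank 0 bank0.bin
> flash read_bank 1 bank1.bin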

This, however, raised another question for me: since the flash is the same on all sensors, where are the settings which can be configured in the sensor (and which weren't changed by re-flashing the firmware)? Since the chip also has 6k of eeprom, that was the logical place to put them. However, openocd doesn't have built-in support to dump eeprom from those chips. I did find the post Flashing STM32L15X EEPROM with STLink under Linux, which modified openocd to support reading and writing of eeprom back in 2015, but it's not part of upstream openocd.

I didn't want to go back to openocd from 2015 or port the changes to the current version, but I didn't have to. Since I was only interested in dumping the eeprom, I was able to do it using the normal mdw command:

    > mdw 0x08080000 1536
    
1536 is the number of 32-bit words in the 6k eeprom (1536 * 4 = 6144). And indeed, the settings which are configurable were stored in the eeprom.
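
If you prefer a binary file over hex words on the console, openocd's generic dump_image command should also work on the same address range (a sketch; the file name is mine):

> dump_image eeprom.bin 0x08080000 6144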

This was a fun journey into openocd and stm32, so I hope it will help someone to get started. All configuration files are available at https://github.com/dpavlin/openocd-rpi2-stm32.

  • Grove Beginner Kit sensors show graphs using InfluxDB and Grafana

grove-beginer-kit-for-arduino.png

Several months ago, I got the Grove Beginner Kit For Arduino for review. I wanted to see if this board would be a good fit for my friends who aren't into electronics, to get them started with it.

So, I started with a general idea: collect values from the sensors, send them to InfluxDB, and create graphs using Grafana. In my opinion, showing graphs of values from the real world is a good way to get started with something that is not possible without a little bit of additional hardware, and it might be a good first project for people who haven't had a chance to try the Arduino platform until now.

The kit is somewhat special: out of the box, it comes as a single board with all sensors already attached, so to start using it you just need to connect it to any usb port (it even comes with a usb cable for that purpose). It also has plastic stand-offs which isolate the bottom side from the surface on which it's placed.

It provides the following sensors on board:

Module                          Interface   Pins/Address
LED                             Digital     D4
Buzzer                          Digital     D5
OLED Display 0.96"              I2C         0x78 (default)
Button                          Digital     D6
Rotary Potentiometer            Analog      A0
Light                           Analog      A6
Sound                           Analog      A2
Temperature & Humidity Sensor   Digital     D3
Air Pressure Sensor             I2C         0x77 (default) / 0x76 (optional)
3-Axis Accelerator              I2C         0x19 (default)

So I decided to show temperature, humidity, pressure, light and sound. I also added the ability to show measurements on the built-in oled display when you press the button. Why the button press? In my experience, oled displays are prone to burn-in, and since the main usage of this sensor board will be sending data to the cloud, it would be wasteful to destroy the oled display which won't be used most of the time.

Programming the Arduino sketch was easy using the Grove Kit wiki pages, which nicely document everything you need to get started. However, I noticed that the wiki suggests using Arduino libraries which have Grove in their names, so I wondered why that is. It turns out that the DHT11 temperature and humidity sensor and the BMP280 temperature and pressure sensor use older versions of the Adafruit libraries which aren't compatible with the latest versions on github. So I tested the latest versions from Adafruit and they work without any problems, just like the Grove versions. If you already have them installed, there is no need to install the additional Grove versions.

If you deploy a sensor like this (probably connected to a small Linux single board computer), it would be useful to be able to update the software on it without the need to run the full Arduino IDE (and keyboard and mouse), so I decided to write a Makefile which installs and uses arduino-cli, a re-implementation of the functionality available in the Arduino IDE, but written in go, which enables usage from the command line (over ssh, for example).
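
For reference, a sketch of the two arduino-cli invocations such a Makefile wraps (the fqbn and serial port here are assumptions for an Uno-compatible board; the real Makefile is in the repository linked below):

# compile the sketch in the current directory for the given board
arduino-cli compile --fqbn arduino:avr:uno .
# upload the compiled sketch over the board's serial port
arduino-cli upload --port /dev/ttyUSB0 --fqbn arduino:avr:uno .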

    grove-grafana.png

So if you are interested in trying this out and want to get graphs similar to the one above, go to the GroveSensor github repository, clone it to your Raspberry Pi, issue make to build it and make upload to send it to your board. You will also need to edit influx.sh to point it to your InfluxDB instance, and then you can start creating graphs in Grafana. All of this will also work on other platforms (like x86, amd64 or aarch64) thanks to the arduino-cli install script.
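
The core of a script like influx.sh can be as simple as one curl call using the InfluxDB 1.x line protocol (a hypothetical sketch; the host, database and measurement names are made up, the real script is in the repository):

# post one measurement over InfluxDB's HTTP write API
curl -XPOST 'http://influxdb.local:8086/write?db=grove' \
     --data-binary 'grove temperature=23.1,humidity=45'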

  • ipmi serial console using grub and systemd

I must admit that Linux administration is getting better over the years. I was configuring an IPMI serial console on old machines (but with a recent Debian), so I decided to find out the optimal way to configure a serial console using systemd.

First, let's inspect ipmi and check its configuration to figure out the baud rate of the serial port:

    root@lib10:~# ipmitool sol info 1
    Info: SOL parameter 'Payload Channel (7)' not supported - defaulting to 0x01
    Set in progress                 : set-complete
    Enabled                         : true
    Force Encryption                : true
    Force Authentication            : false
    Privilege Level                 : ADMINISTRATOR
    Character Accumulate Level (ms) : 50
    Character Send Threshold        : 220
    Retry Count                     : 7
    Retry Interval (ms)             : 1000
    Volatile Bit Rate (kbps)        : 57.6
    Non-Volatile Bit Rate (kbps)    : 57.6
    Payload Channel                 : 1 (0x01)
    Payload Port                    : 623
    
Notice the 1 after info. This is the serial port which carries the sol console. If you run ipmitool without this parameter, or with zero, you will get an error:

    root@alfa:~# ipmitool sol info 0
    Error requesting SOL parameter 'Set In Progress (0)': Invalid data field in request
    
Don't panic! There is an ipmi sol console, but it's on ttyS1!
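
To actually attach to that sol console over the network, something like this should work (a sketch; the BMC address and credentials are placeholders):

ipmitool -I lanplus -H bmc.example.com -U admin sol activate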

To configure the serial console for the Linux kernel, we need to add something like console=ttyS1,57600 to the kernel command line in grub, and configure grub itself to use the correct serial port and speed:

    GRUB_TERMINAL=serial
    GRUB_SERIAL_COMMAND="serial --speed=57600 --unit=1 --word=8 --parity=no --stop=1"
    
All required changes to the default configuration are below:

    root@lib10:/etc# git diff
    diff --git a/default/grub b/default/grub
    index b8a096d..2b855fb 100644
    --- a/default/grub
    +++ b/default/grub
    @@ -6,7 +6,8 @@
     GRUB_DEFAULT=0
     GRUB_TIMEOUT=5
     GRUB_DISTRIBUTOR=`lsb_release -i -s 2< /dev/null || echo Debian`
    -GRUB_CMDLINE_LINUX_DEFAULT="boot=zfs rpool=lib10 bootfs=lib10/ROOT/debian-1"
    +# serial console speed from ipmitool sol info 1
    +GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS1,57600 root=ZFS=lib10/ROOT/debian-1"
     GRUB_CMDLINE_LINUX=""
    
     # Uncomment to enable BadRAM filtering, modify to suit your needs
    @@ -16,6 +17,8 @@ GRUB_CMDLINE_LINUX=""
    
     # Uncomment to disable graphical terminal (grub-pc only)
     #GRUB_TERMINAL=console
    +GRUB_TERMINAL=serial
    +GRUB_SERIAL_COMMAND="serial --speed=57600 --unit=1 --word=8 --parity=no --stop=1"
    
     # The resolution used on graphical terminal
     # note that you can use only modes which your graphic card supports via VBE
    
So, in the end, there is nothing to configure on the systemd side. If you want to know why, read man 8 systemd-getty-generator.
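
As a quick check (a sketch, assuming the ttyS1 setup above): the generator should have instantiated a getty on the console named on the kernel command line:

systemctl status serial-getty@ttyS1.service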

  • Playing video on WS2812 panel

It all started more than a week ago when I was given a 10x10 panel of ws2812 leds designed to be broken apart into individual boards. I might have said at that moment: "It is a panel, it's just missing a few wires", and so this story begins...

    IMG_20200301_130434.jpg

It took me the whole day to add those few wires and turn it into a panel.

    IMG_20200301_163206.jpg

I started testing it using an Arduino Nano with the wrong FastLED example (which supports just 4 ws2812 leds) and wondered why I wasn't getting the whole panel to light up. After some sleep, I tried an Adafruit example, fixed one broken data out-in wire in the middle of the panel, and got this:

IMG_20200302_105830.jpg

    So, playing video on this panel should be easy now, right?

First, I had to choose a platform to drive the panel. While my 10x10 panel with 100 leds needs just 300 bytes for a single frame, I didn't want to have a video-sending device wired to it. So the esp8266 was the logical choice to provide network connectivity to the panel without a usb connection (which we still need, but just for power).

At first, I took a Lolin NodeMCU clone, which doesn't have 5V broken out (why?); its VIN pin has a diode between the USB 5V pin and VIN, and the diode voltage drop is enough to keep the ws2812 dark all the time.
Switching to a Wemos D1 mini did help there, but what to run on it? I found some examples that were too clever for me (for an 8x8 panel, they use jpeg and just decode a single 8x8 block to show it, which won't work for my 10x10 panel).
After a bit of googling, it seems to me that the https://github.com/Aircoookie/WLED project is something of a Tasmota for WS2812 on ESP8266, so I decided to use it. While it's not designed to support WS2812 matrices, only simple strips, it has UDP realtime control which lets you send it a 302-byte UDP packet (a two-byte header plus 300 bytes of RGB data).

So I started writing scripts, which are at https://github.com/dpavlin/WLED-video, to first convert video to raw frames using something as simple as ff2rgb.sh:

dpavlin@nuc:/nuc/esp8266/WLED-video$ cat ff2rgb.sh
#!/bin/sh -xe

f=$1

# create (or clean) the directory for extracted frames
test ! -d $f.rgb && mkdir $f.rgb || rm -v $f.rgb/*.png
# extract frames scaled down to the 10x10 panel resolution
ffmpeg -i $f -vf scale=10x10 $f.rgb/%03d.png
# rotate to match panel orientation, gamma-correct, emit raw 8-bit RGB
ls $f.rgb/*.png | xargs -i convert {} -rotate 180 -gamma 0.3 -depth 8 {}.rgb

    
To send frames I wrote a simple send.pl script. I would have loved to use bash's udp support or some standard utility (like netcat or socat) to send frames, but null values in the data didn't work well with shell pipes and I wasn't able to make it work.
I also figured out that I had to modify the gamma values of my frames so that colors are somewhat more correct (I had a flame video which had blue hues in it without gamma correction). This is somewhat strange, because WLED does have gamma correction for colors turned on, but it doesn't help, and turning it off doesn't help either. So, gamma correction in pre-processing it is...

And since I already had a perl script to send UDP packets, I decided to open ffmpeg from it and make a single script, ff2wled.pl, which sends video to the panel like this:

    dpavlin@nuc:/nuc/esp8266/WLED-video$ ./ff2wled.pl rick.gif
    

    rick-panel.gif

Was it all worth it? Honestly, no. The panel is so small that video playback is really too much for such a low resolution, and it would be much easier to buy a ready-made panel with more leds. But I did learn a few tricks with ffmpeg, and hopefully somebody else will benefit from this post.

  • BalCCon 2k19 - So, is Android a Linux?

Last weekend I had the pleasure of attending BalCCon 2k19 and presenting my talk So, is Android a Linux?, which is embedded below. It was a great conference, and I hope that my talk gave some food for thought and hints on how to run Linux on Android devices.



Posts from blogs that I find interesting

fetchrss: http://reblog.rot13.org/out/rss.php?user=1

permalink
DraftGrad2007

__PROGRAMME NAME:

G33KOSKOP

 - An archaeology of digital heritage through the prism of "geek" culture

__PROGRAMME DESCRIPTION:

The relationship between information and communication technologies and culture, understood as a set of mostly social and humanistic symbolic practices, is one of the most interesting topics among current theoretical interpretations of the contemporary world. The accelerated development of technology permeates all spheres of contemporary culture: from aesthetics and art, through specific questions of positions of power and control over (informational/cultural) resources, to questions of authorship and originality and the integrity of a work and/or its author.

The most common interpretation of the relationship between technology and culture is an attempt to assess how much something old has been transformed in its encounter with something new (as a consequence of technological development). Very rarely is the focus on the culture that emerges in direct interaction with the new, in this case with the development of digital communication technologies. Even more rarely is the focus on people whose lifestyle has been shaped by new communication and digital technologies. For such people the term "geek" has taken hold, coined in the USA in the second half of the 20th century.

The meaning and connotation of the term "geek" have changed radically over the last few decades: in the very beginnings of computer technology, a "geek" primarily meant a person (most often an adolescent) completely devoted to new technologies who (according to the stereotype) lacks developed social skills and whom society sees as "warped", bizarre and marginal.

As new technologies were adopted by society, the knowledge that geeks possessed became increasingly sought after and acceptable. Over time geek culture became more and more accepted, and its members came to be seen as advanced and progressive members of wider society.

The creation of the myth of the über-geek, exceptionally intelligent, extremely skilled and self-sufficient, was favoured by the economic explosion in personal computer production, in which young people became fabulously rich in a very short time. That myth includes a period of frustration caused by social rejection through adolescence, and a later (symbolic) revenge through exceptional wealth and ascent up the social ladder.

The influence of geek culture on popular culture over the last twenty or so years is almost immeasurable: cyberpunk, computer games, computer animation, networked collaboration, science fiction, conspiracy theories, offbeat humour, bizarre hobbies and collecting are just some of the areas on which geek culture has exerted no small influence. Geek culture has also revived and reworked, in its own characteristic way, various subcultural legacies such as the counterculture, the hippie movement, punk and others.

The Danish theorist Lars Konzack calls geek culture the third counterculture: after the counterculture of the 60s and the yuppie counterculture of the 80s, he sees the third influential counterculture in the youth culture closely tied to computer technology. (Lars Konzack: Geek Culture, The 3rd Counter-Culture; presented 26-28 June 2006 at the FNG2006 conference in Preston, England.)

Geek culture is obsessed with knowledge and information, and in an aesthetic sense it most often reaches for marginal, bizarre, tasteless and radical phenomena that do not fit into mainstream culture.

Like everything in contact with new technologies, geek culture is primarily a cultural narrative shaped by the North American context. Eastern Europe, on the other hand, and this includes Croatian culture with its socialist heritage, gives geek culture a special character through its disillusionment with the omnipotence of knowledge and social progress. In addition, the socialist culture of technical education, present through technical culture associations, various amateur photo and cinema clubs and other similar institutions, provides a distinctive context for the development of geek culture in Croatia.

Probably the most relevant branches of geek culture are hacker culture and the Free Software movement. The culture of giving and selfless exchange of information, the principle of opening up source code, and the proven and successful models of collaborative work known from the free software world have become an inspiration and a model for wider social circles. The movement built on the foundations of free software is called the Free Culture movement, gathered around the initiative for an alternative approach to copyright (CreativeCommons).

Through a series of guest lectures, presentations, film and video screenings, and moderated discussions on geek culture, the Razmjena vještina (Skill Exchange) project and its satellite G33KOSKOP will continue two years of successful work on bringing together the hacker and geek community and reflecting on the influence and scope of geek culture today. As part of the existing library and media archive of club mama, our goal is to create a special section devoted to geek culture.

All events will be recorded and published as a video podcast on http://www.razmjenavjestina.org, a popular format for distributing multimedia content.

The moderated discussions will open topics such as: G33k&G33k, G33k&politics/activism, G33k&sex/gender, G33k&art, G33k&games, Geek&reality, G33k&elitism, G33k&queer, G33k&solidarity. The discussions will be moderated by Klaudio Štefančić, and a continuation of the wider discussion is expected once the video documentation is published in podcast format.

The film screenings will continue to show cult feature, documentary and animated films related to geek culture, which in the new G33KOSKOP will be grouped into the thematic blocks listed in the section on moderated discussions. Examples of films: the video document of the cult Disinfo 2000 conference, which, modelled on William Burroughs' 1979 Nova Convention, speaks about conspiracies, the underground, politics, independent media, science, fantasy, science fiction, a dark past and an even darker future...; "The Century of the Self", a documentary in four episodes about the influence of psychoanalysis and the ideas of Sigmund Freud, as well as his relatives and successors, on the culture, science, economy and politics of the 20th century; "Hippies from Hell", a documentary about a group of Dutch hackers who founded one of the most successful Internet service providers in the world.

The guest lectures and presentations will serve as an attempt to write a history of g33k culture in the local and global context, and of their similarities and differences.

In an attempt to trace the paths of geek culture in Croatia over the past few decades, we will also have to present the interrelation of technology and art, which in Croatia began in a serious and internationally relevant way with the appearance of the New Tendencies (Nove tendencije) art movement during the first half of the 60s, and which culminated in 1968 with the launch of the journal Bit International and the international symposium and exhibition NT4 on the theme Computers and Visual Research. These Zagreb events coincided with international events of a similarly pioneering character, among which the exhibition and symposium Cybernetic Serendipity, organised the same year by the London Institute of Contemporary Arts, certainly deserves mention. It was, entirely in keeping with certain principles of geek culture, the first attempt to bring computer technology out of scientific laboratories into a wider, in this case high-art, culture, and to include representatives of various social fields, such as artists, sociologists, philosophers, musicians and the like, in the process of the culturalisation of computer technology.

In this sense, one of the most desirable interlocutors is certainly Matko Meštrović, art critic and sociologist, who actively participated in shaping the New Tendencies. His interest has for years been focused on social changes caused by accelerated technological development, which makes him an almost ideal interlocutor in the attempt to historicise the interrelation of science, technology and culture, both through the figure of the artist-researcher and through the contemporary figure of the geek, or hacker.

No less desirable an interlocutor in the series of lectures we wish to organise is Frieder Nake, German mathematician and artist, who took part in the pioneering New Tendencies exhibitions together with his Zagreb colleagues. Unlike Meštrović, Nake is primarily a mathematician and artist, so for a series of lectures on geek culture he is interesting precisely from the perspective of a practitioner, a maker. In 1963 he began experimenting on Konrad Zuse's legendary Graphomat Z64 machine, and in 1965 he was, together with Michael Noll and Georg Nees, among the first artists to use a computer in the making of a graphic image.

In the context of our historicisation of geek culture, Frieder Nake is especially interesting from the perspective of his recent work as a professor in the Department of Interactive Computer Graphics at the University of Bremen. We believe that his double experience of using the computer before and after the appearance of the Internet, that is, in its earliest beginnings and in its absolute cultural dominance, is exceptionally interesting for a comparative presentation and analysis of the various technical and cultural models through which the relationship between man and machine, that is, the computer, has been interpreted throughout history.

Matko Meštrović is a research adviser at the Institute of Economics in Zagreb. He was professor of design theory in the interfaculty programme at the Faculty of Architecture of the University of Zagreb, director of the Croatian Institute for Culture (1987-1992), and before that adviser to the director general of Radio-televizija Zagreb. In the early 1960s, as an art critic and theorist, he gained a significant reputation by participating in the organisation of the international New Tendencies movement. He was a member of the Gorgona group, editor-in-chief of the journal Dizajn (1968/69), member of the editorial board and editor of the Zagreb journal Bit International (1968/72), and member of the consultative board of the international Journal of Communication, Philadelphia (1974/80).

Frieder Nake (1938) studied mathematics at the University of Stuttgart, where he also received his doctorate. In 1968/69 he was a visiting researcher in computer graphics at the University of Toronto. In 1968 he took part in the exhibitions devoted to computer art, Cybernetic Serendipity in London and NT4 in Zagreb. After 1970 his work also includes economic, political and theoretical critique of the development of computer science. He is the author of several books, the best known of which, Ästhetik der Informationsverarbeitung (1974), was among the first in the field of art theory and criticism to make possible an adequate discourse on the interrelation of science and art.

Dejan Ristanović, once the most prominent journalist and programmer at the cult magazine Računari and today editor-in-chief of PC Press and co-owner of the SezamPro Internet service provider, is probably the best qualified person to revive memories of the beginning of the spread of computer culture in the former Yugoslavia in the mid and late eighties.

Regine Debatty is the author of one of the most popular and influential blogs on art and technology, http://www.we-make-money-not-art.com. Born in Belgium, she now lives between Milan and Berlin and spends most of her time travelling and taking part in art and technology festivals and conferences.

Steven Johnson is one of the most influential people of cyberspace. Founder and editor-in-chief of the online magazine Feed, he studied semiotics and English at Columbia University and is the author of a series of books on cyberculture: Interface Culture, Everything Bad Is Good for You, Emergence, Mind Wide Open...

Alexander R. Galloway is a professor in the Department of Culture and Communication at New York University. For six years he ran Rhizome.org, a web platform for tracking and promoting new media culture and art. He founded the artist-hacker group Radical Software Group, known for the work Carnivore, which won the Golden Nica at Ars Electronica in 2002. He is the author of Protocol: How Control Exists After Decentralization (MIT Press, 2004) and Gaming: Essays on Algorithmic Culture (University of Minnesota Press).

Richard Barbrook is the best-known critic of the neoliberal cyber social elite. For years he ran the Hypermedia Research Centre at the University of Westminster. In the eighties he was an active member of the pirate and community radio scene. He is the author of the essays "The Hi-Tech Gift Economy", "Cyber-communism" and "The Regulation of Liberty", and is currently finishing the book Imaginary Futures.

Bruce Sterling is, together with William Gibson, Jeff Noon, Tom Maddox, Rudy Rucker, John Shirley, Lewis Shiner and Pat Cadigan, a founder of the cyberpunk movement. He is the author of books such as Involution Ocean (1977), The Difference Engine (1990, with William Gibson), Distraction (1998), The Zenith Angle (2004) and others, and the initiator of projects such as the Dead Media Project and the Viridian Design Movement...

About Razmjena vještina (Skill Exchange):

The programme called "Razmjena vještina: pokaži što umiješ!" (Skill Exchange: show what you can do!) consists of weekly gatherings in the net.culture club mama at which enthusiasts share their useful experience, knowledge and skills. Planning is helped by the online skills exchange at http://www.razmjenavjestina.org, where those interested present what they want to share with others and what they would like others to show them.

In two years the programme has managed to gather a community of people who regularly exchange knowledge and skills, organise lectures and screenings, and distribute free software on CD media. Razmjena vještina has made guest appearances at Operacija:grad, the exhibition Ekonomije među nama at Galerija Nova, the festival "Moje, tvoje, naše" in Rijeka, at club Rex in Belgrade, and at the art festival "Generosity" in Graz...

Razmjena vještina is an example of the Multimedia Institute's work on transferring knowledge and models from the technical sphere into the cultural and social one, while also trying to bring the wider context of the use of technology closer to people from the technical sphere, thus establishing a communication channel between the two fields.

Razmjena vještina finds its inspiration in the culture of selfless sharing of the free software community. At every gathering, those interested are free to bring their computers, laptops and handheld computers to have the GNU/Linux operating system installed on them, but the skills and knowledge are not limited to computers and technology; questions and answers are the only limit. For the skill exchange, every Saturday from 12:00 to 15:00, a multimedia computer connected to the Internet, DVD and VHS players, an LCD projector, and basic software and hardware tools for various installations and modifications of software and hardware are available at mama.

__PLANNED TIME AND PLACE:

Weekly events on Tuesdays in the net.culture club mama.

__DESCRIPTION OF THE SPACE AND TECHNICAL CONDITIONS IN WHICH THE PROGRAMME WILL TAKE PLACE:
In the net.culture club mama there is a 70 m^2 "Duty Free" space with all the multimedia, technical and logistical support needed for film and video screenings, as well as for round tables, discussions, lectures and/or presentations.

__LIST OF MAIN COLLABORATORS ON THE PROGRAMME, WITH PROFESSIONAL BIOGRAPHIES:

All collaborators are regular participants in Razmjena vještina.

Nenad Romić (aka Marcell Mars) is one of the founders of the Multimedia Institute (mi2) and club mama in Zagreb. He initiated several mi2 projects, such as the GNU GPL publishing label EGOBOO.bits, the TamTam platform for online collaboration, and Ngode, software for the financial management of NGOs; he started Razmjena vještina and its satellites g33koskop and Sajam zajebane opreme. He took part in the collaborative art projects NRD Kit of the group NRD Van and gifoskop (an interactive moving animation), and provided technical support for the projects EditThisBanner by Lina Kovačević and Leteći tepih (Flying Carpet) by Lala Raščić.

Together with others, Marcell produced and/or curated the annual mi2 exhibitions "I'm still alive" (2001) and <re:Con> (2002), the festival of free culture, science and technology "Sloboda stvaralastvu!" in 2005 and 2006, and the conceptual exhibition System.hack() in 2006. He is part of the CreativeCommons Croatia team.

One of the coordinators of Otokultivator (2001-2003), an advocate of free culture, science and technology, an advanced Linux user and a lover of text interfaces, he writes about hacker, g33k and media culture. Together with Tomislav Medak he prepared GNU Pauk, a collection of texts on GNU philosophy in a wider cultural context. He occasionally and obsessively blogs at http://www.picigin.net/logcells.

http://www.mi2.hr http://www.egoboobits.net http://tamtam.mi2.hr/TamTamDev/NGOdE http://www.razmjenavjestina.org http://nrd.picigin.net/ http://www.editthisbanner.net http://tamtam.mi2.hr/TamTamDev http://www.mi2.hr/alive http://re.mi2.hr http://www.otokultivator.org http://www.gnupauk.org

Klaudio Štefančić graduated in comparative literature and art history from the Faculty of Humanities and Social Sciences in Zagreb in 1995. He worked as a curator at the Sisak City Museum and at the Klovićevi dvori Gallery in Zagreb, and is currently head of the Galženica Gallery in Velika Gorica. He writes and publishes texts on art and literary criticism and theory in "Život umjetnosti", "Kontura", "Zarez", the Second Programme of Croatian Radio and "Umjetnost riječi". He is the author of the monograph "Montaža organizma" (Fraktura, Zaprešić, 2005) on the artistic work of Daniel Kovač.

Dobrica Pavlinušić is a prominent developer and software architect of free software as well as of corporate IT solutions. Dobrica received the "Otvorena informatika" award for 2005 for his contributions to advocating and developing open standards and free software.

Alan Pavičić is the chief software architect at AL-AST d.o.o., the Croatian branch of the Austrian company AVL GmbH, which develops software for simulation and calculation of internal combustion engines.

__DESCRIPTION OF THE ORGANISER'S RELEVANT ACTIVITIES TO DATE:

Two years of the continuous Razmjena vještina programme (and its satellites G33koskop and Sajam zajebane opreme) every Saturday at club mama, and guest appearances of the project at Operacija:grad, the exhibition Ekonomije među nama at Galerija Nova, the festival "Moje, tvoje, naše" in Rijeka, at club Rex in Belgrade, and at the art festival "Generosity" in Graz...

Participation of the organisers as speakers and presenters at numerous hacker and other technology conferences: DORS/CLUC, Jednostavno: Linux!, and Open Source 2006 in Zagreb, Transhack meeting in Pula, What the Hack and Next5minutes in the Netherlands, the Chaos Communication Congress, Wizards of OS and Transmediale in Berlin, Ars Electronica in Linz, GPLv3 in Boston, iSummit in Boston and Rio de Janeiro...


original Jul 21 1:46pm

permalink
RiIRCKanal

IRC channel of the Rijeka skill exchangers

Hang out and exchange skills with other Rijeka folks at:

Srvr: irc.freenode.net
Port: 6667 - plain
Port: 6669 - ssl
Chan: #rirazmjena

For access through the Tor network:
Srvr: mejokbp2brhw4omd.onion

A service (ro)bot is also available on the channel, with functions such as: an HR <> ENG dictionary, a rot13 (de)coder, TV schedule, SMS2mail, a Linux version tracker...
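
For example, with irssi (a sketch; any IRC client will do, using the connection details above):

irssi -c irc.freenode.net -p 6667
/join #rirazmjena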


permalink
HorZaTrazi
  • help building a razmjenavjestina terminal-client environment (pxe/nfs?/packaging/deb repo)
  • advice and best practices for Debian web/database/filesystem clustering/replication
  • advice and best practices for solid SAMBA/LDAP/AD integration
  • advice on clustering Tomcat
  • information about the world around us (especially IT tidbits that can't be found in the newspapers or on that... what's it called... gooooogle)
permalink
HorZaNudi
  • personal and/or non-profit web sites
  • advice and help with implementation, scripts, bits and pieces
  • training and setting up ready-made GNU GPL CMSes, forums, and whatnot
  • hardware (only compooters and components for same)
  • endless philosophising and theorising about self-organisation
  • the above applied to web communities (forums, various other forms of community)
  • things from the previous item... various ways to automate and reorganise boring processes in anything and everything
permalink
RijeckiRazmjenjivaci

Skill exchangers:

AiC

PaE

ManS

B4D

v-v

SynaN

SmrtCRO

Frenki

Mali


Here you will also find skill exchangers from other cities - RazmjenjivaciVjestina.


permalink
SiraM

*Looking for...

an acceptable 64-bit Linux distribution.*

Explanation...

I'm looking for a distribution that will work on a 64-bit (AMD) processor
(HP nx632, AMD Mobile Turion 64 X2 TL-52 (1.6 GHz), ATI Xpress 1150 chipset, 1 GB DDR2 667 MHz, Broadcom Netlink Gigabit Ethernet PCI controller, Broadcom 802.11 abg wireless, integrated Bluetooth module...)

Tried so far: Gentoo, about which all I can say is that unfortunately I don't have time to compile all of that. Unfortunately.
Ubuntu...

The next attempt. Since it is well known that there is no more suitable distribution than the one you're familiar with, I took openSuse 10.x. It didn't work out: a lot of things hung, packages refused to install... Packages.
Ubuntu, second time around. It turned out that this distribution also doesn't have all the software fully developed... and it turned out that I am also unable to solve, on my own, problems for which there is a more than detailed howto on their wiki, and that is, if not the single most important, then a key condition when choosing a distribution: that I can handle it on my own.

I've also thought about the idea of 64-bit distributions, to call them that. I would dare to say that (currently) they are not optimised for 64-bit processors at all. As a must-have in their lineup... so let's recompile the 32-bit one and now a 64x distro is officially released... yay! Not to mention the software a second time. There, I've mentioned it.

Does it even make sense, at this moment, to look for a distro for a 64-bit processor?



The answer is Arch GNU/Linux http://archlinux.org/, a distribution that combines the best of the Linux and BSD worlds, making it one of the best distributions around. A KISS approach and a top-notch package manager (something like Slackware with an excellent pkg. manager); along with the package manager there is also a ports-like system (called ABS) modelled on BSD, and even the init system is closer to BSD than Slackware's (read: simpler, and incomparably cleaner and simpler than Red Hat, Debian and their derivatives - the KISS approach mentioned earlier). But the reason I mention all of this is that Arch was from the beginning a distribution aimed exclusively at i686 machines, with all packages optimised for that architecture... and for Gentoo ricers, if even that is not enough, there is the aforementioned ABS (like BSD ports), which makes it extremely easy to recompile and repackage software (in this case with more aggressive flags)... Arch today also has an *official* x86-64 port, which I definitely recommend you try. Another peculiarity of this distribution is that it has no releases in the classic sense but a so-called "rolling release" system, where packages are constantly refreshed in the repositories and it is up to the user to run "pacman -Syu" every so often and stay fully in sync with the latest software/packages (if that's what they want)... there are plenty more peculiarities and great things, but you'll see that for yourself once you become an Arch user.

http://wiki.archlinux.org/index.php/The_Arch_Way ;)


original Apr 7 3:33pm

permalink
AboutRijeka

About the Rijeka exchange

Razmjena vještina (Skill Exchange) is an informal weekly gathering at which enthusiasts share their useful experience, knowledge and skills. The exchange finds its inspiration in the culture of selfless sharing of the free software community. Since the initiators of these gatherings in Rijeka are mostly part of that community, it can be expected that the initial exchanges will be more thematically tied to free software. Which of course doesn't mean we aren't interested in other areas and skills. We hope to gather Rijeka's geeks, programmers, hackers, hacktivists... who want to pass their knowledge on to others, but also people who want to acquire certain IT skills, who are in fact the key ingredient of these gatherings.

The programme is spontaneous; there is no strict schedule or rules. People get together once a week in a relaxed atmosphere, talk, exchange skills... An exchange can go one on one, one person to several people, or to everyone. Of course, that doesn't mean we won't sometimes agree in advance on what we could talk about. For that, as for other things, the online knowledge exchange we have prepared at http://www.razmjenavjestina.org/rijeka can help. The pages are open to everyone; on them you can present what you want to share with others and what you would like to be shown. We can prepare the programme, leave some grains of our wisdom, and write announcements as well as summaries of past exchanges. In the same place you will find exchangers from other Croatian cities such as Zagreb, Split and Čakovec; together with them, let's build an open treasury of skills.

For the skill exchange, every Wednesday from 17:00 to 22:00 at Molekula, a comfortable room (the so-called HackLab) is available with enough chairs, one or more desktop computers connected to the Internet, a wifi AP or a switch port for your laptop, an LCD projector, and basic software and hardware tools for various installations and modifications of software and hardware. Every participant is free to bring their own laptop (or handheld) computer. That is even desirable, especially for those exchanges where we will be programming together and the like. Besides the technical equipment, it is very important to have juice and biscuits to offer around; we can't guarantee that, but we'll try to sort out the supply from one exchange to the next :)

See you at the skill exchange!


permalink