Fidonet

I just came across something again about the history of Fidonet.

As a node / BBS administrator I also helped out around 1997 with the Algemene BBS lijst Nederland (the general Dutch BBS list). And wouldn't you know it, there I am, listed as a contributor ..

November 1997 ;)

abn199711

From a slightly later ARJ archive file: ABN

"Unfortunately, as of June 1997 Dennis Slagers has stopped co-compiling the ABN list. During the period he worked on the ABN list he put a lot of effort into promoting it (thanks to him, the latest ABN list can now also always be found on Gertjan Groen's ISDN page on the internet!) and improving it. We want to thank him warmly for his efforts!"

Text that I (I think) never read, because I stopped running the BBS on 1-1-2000.

19 years of blogging ..

19 years of blogging. It has been very quiet here lately, but then again, digital life looks completely different after all these years.

19 years with a weblog. What began as blogging about a little SMS contest that I won (if you search carefully, even that info can still be found here .. ) has in the end turned this blog into a digital-history site where almost all posts are still present. (Here and there people asked me to take a post down, for example because it used a photo that was not allowed.) ..

Funny .. a little while longer and we will have been at it for 20 years ..

Update 16/10/2023 .. I had actually simply forgotten some old stuff on the weblog. I have been blogging since 2000 .. ;) so 23 years ..

Fixing Proxmox boot ending in grub prompt with ZFS disks

I have removed my outdated how-to, as it was not functional for my Proxmox version (6.4.x) during a planned reboot on 5-7-2021 .. I am lucky I did not have a power outage earlier, as that would have given me more trouble ..

Proxmox has made a great tool: proxmox-boot-tool .. and following the guidelines in the link below gave me a working Proxmox in only minutes (although it took some hours before I had a working USB thumb drive; one turned out to be defective) ..

But there is a great how-to here

Important parts

Repairing a System Stuck in the GRUB Rescue Shell

If you end up with a system stuck in the grub rescue> shell, the following steps should make it bootable again:

  1. Boot using a Proxmox VE version 6.4 or newer ISO
  2. Select Install Proxmox VE (Debug Mode)
  3. Exit the first debug shell by typing Ctrl + D or exit
  4. The second debug shell contains all the necessary binaries for the following steps
  5. Import the root pool (usually named rpool) with an alternative mountpoint of /mnt:
    zpool import -f -R /mnt rpool
  6. Find the partition to use for proxmox-boot-tool, following the instructions from Finding potential ESPs
  7. Bind-mount all virtual filesystems needed for running proxmox-boot-tool:
    mount -o rbind /proc /mnt/proc
    mount -o rbind /sys /mnt/sys
    mount -o rbind /dev /mnt/dev
    mount -o rbind /run /mnt/run
  8. Change root into /mnt:
    chroot /mnt /bin/bash
  9. Format and initialize the partitions in the chroot – see Switching to proxmox-boot-tool
  10. Exit the chroot-shell (Ctrl + D or exit) and reset the system (for example by pressing CTRL + ALT + DEL)
  11. Note: The next boot can end up in an initramfs shell, due to the hostid mismatch (from importing the pool in the installer).
    If this is the case, simply import it again using the force (-f) flag:
    # zpool import -f rpool

This first part gives you the starting point to fix GRUB again.

Find the partitions:

lsblk -o +FSTYPE

(in my situation these were the vfat partitions of > 512 MB)
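For reference, the output looks something like this (a hypothetical two-disk mirror layout, trimmed to the relevant columns; your device names and sizes will differ):

NAME     SIZE  TYPE  FSTYPE
sda      1.8T  disk
├─sda1  1007K  part
├─sda2   512M  part  vfat
└─sda3   1.8T  part  zfs_member
sdb      1.8T  disk
├─sdb1  1007K  part
├─sdb2   512M  part  vfat
└─sdb3   1.8T  part  zfs_member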

Format them again:

# proxmox-boot-tool format /dev/sda2
# proxmox-boot-tool format /dev/sdb2

NB. I used the --force option, as there was already data on the vfat partitions.
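So in my case the actual calls looked like this (same partitions as above, with the flag appended):

# proxmox-boot-tool format /dev/sda2 --force
# proxmox-boot-tool format /dev/sdb2 --force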

The init part:

# proxmox-boot-tool init /dev/sda2
# proxmox-boot-tool init /dev/sdb2

and if needed (as was the case in my situation):

# proxmox-boot-tool clean

Reboot, and I was up and running again with my node in the cluster ..
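If you want to double-check the result afterwards, proxmox-boot-tool also has a status subcommand that lists the ESPs it is configured on:

# proxmox-boot-tool status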

The next GUIDE to install NGINX-PROXY-MANAGER without the bad gateway database issues

Before reading: the main reason nginx-proxy-manager was not running in my environment was that I was running my Linux system as an LXC container under Proxmox, not as a VM. After failing a second time with exactly the same config files that should have worked, I noticed that I was using LXC while the Proton VM was an actual VM; by changing to a normal Debian VM I quickly had a working version again.

This is probably also the reason that Portainer was not able to start the database. So in the end: use a VM ..

I was reading another website that explained how to install nginx-proxy-manager. But I failed. I kept getting 'bad gateway', and if you read the GitHub posts about this issue you will not understand why it all keeps failing.

So yes, I installed the Proton VM, a sucky virtual machine, under my Proxmox, because the guy from that other website used it. As Docker was available, I only had to make it start at boot; those guidelines were described fine. But installing my own MySQL or MariaDB failed time after time, especially because MariaDB or MySQL ended up without a root password. So I failed. But why?

So in the end (lucky me, I had a snapshot, so I could go back whenever I messed things up really badly) I restarted the machine and thought about what I had read on another website: nginx-proxy-manager 'now' provides a MySQL instance itself. AHA .. so if that is true, then I should forget all the info about the Docker database stuff I had installed myself. So I removed those failures from the system.

I checked the website of nginx-proxy-manager and thought: let's start over ..

In the end, to make this story short:

I made sure the server entry pointed to "host": "127.0.0.1", in the config.json.

Make sure there is a config.json.
Place this config.json where you run 'docker-compose up -d'.
I did it in /home/nginx-proxy-manager/.
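For reference, the database section of such a config.json looked roughly like this at the time (a sketch; the name, user and password values are placeholders for your own database settings, the host line is the part that matters here):

{
  "database": {
    "engine": "mysql",
    "host": "127.0.0.1",
    "name": "npm",
    "user": "npm",
    "password": "npm",
    "port": 3306
  }
}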

And probably here is the catch, as the default example says:

 # Make sure this config.json file exists as per instructions above:
      - ./config.json:/app/config/production.json

The /app/config/production.json is a location where you did not put your own config.json. So this part is totally wrong: the config.json with your database settings can never be found, so you get issues, and the comment 'make sure this config.json exists as per instructions above' gave me no clue, because what exactly is stated above?

So I tried what I had done before: in the docker-compose.yml I changed the location of my config.json to

- ./config.json:/home/nginx-proxy-manager/config.json

Now I restarted the Docker container again, but I made an error: the container was started with docker-compose up without the -d (detached) flag … so I got output on my screen, and suddenly I saw that there was a connection to the database but my password was not accepted.

I made sure I shut the Docker container down again, removed the contents of the data directories and started it once more .. YEAAAHHH .. finally .. it was working.

In short, two things to notice:

In config.json: change the host part to "host": "127.0.0.1",
In the docker-compose.yml, point the config.json entry to the actual location on your HDD where you put it.

Now start it with e.g. docker-compose up -d
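Put together, the relevant part of the docker-compose.yml would look something like this (a sketch based on the nginx-proxy-manager examples of that era; the image name, ports and extra volumes are assumptions to adapt to your own setup):

version: "3"
services:
  app:
    image: jc21/nginx-proxy-manager:latest
    restart: always
    ports:
      - "80:80"
      - "81:81"
      - "443:443"
    volumes:
      # left side = the config.json on your own disk, right side = the path inside the container
      - ./config.json:/app/config/production.json
      - ./data:/app/data
      - ./letsencrypt:/etc/letsencrypt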

have fun

Message to self: VMware root disabled on web UI and shell

pam_tally2 --user root

In my example there were 25 failed root login attempts:

Login   Failures  Latest failure     From
root    25        01/02/20 10:56:59  unknown

To clear the password lockout, use the following command:

pam_tally2 --user root --reset

ALT-F1 brings you to the shell if it is enabled (if it is not, the screen also appears, but no username/password can be entered).

ALT-F2 brings you back

No space left on device (VMware)

The upgrade goes wrong:

esxcli software profile update -p ESXi-6.7.0-20190802001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

gives 'no space left on device', with the following error:

[Errno 28] No space left on device
vibs = VMware_locker_tools-light_10.3.10.12406962-14141615
Please refer to the log file for more details.
[root@ezsetupsystemb05ada87ad44:~] cd /tmp
[root@ezsetupsystemb05ada87ad44:/tmp] wget http://hostupdate.vmware.com/software/VUM/PRODUCTION/main/esx/vmw/vib20/tools-light/VMware_locker_tools-light_10.3.10.12406962-14141615.vib
Connecting to hostupdate.vmware.com (92.123.124.29:80)

After this, run the update again:

esxcli software profile update -p ESXi-6.7.0-20190802001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

and now it is OK.
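To double-check which image profile the host ended up on, the standard esxcli query below can be used (not part of the original fix, just a sanity check):

esxcli software profile get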

AroundMyRoom: Schiphol application update

Last year I took the plunge and built a Schiphol application based on the Schiphol API .. At the beginning of April 2019 the developers moved the API to version 4 and made a number of changes.

I implemented those changes after some fumbling by the API people (in one example a colon was missing, so the example would not return any data). Once I had found that little mistake in the PHP example, in the api key and api secret, I got some data; then it was a matter of adjusting the function, because the api key and secret now have to be sent along in the request header.

Playing around with that example I got some data. Unfortunately, the way the URL had to be built no longer worked either: if you pass a time, a date has to go with it. So I converted a few things here and there, and after a couple of hours the Schiphol app runs again on the new V4 of the API, and that from a complete layman without PHP knowledge ;)
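For what it is worth, reduced to a bare HTTP call the V4 scheme comes down to something like this (a sketch based on the public Schiphol API documentation; the header names, endpoint and query parameters are my assumptions, not code from the actual app):

curl -s \
  -H "app_id: YOUR_APP_ID" \
  -H "app_key: YOUR_APP_KEY" \
  -H "ResourceVersion: v4" \
  "https://api.schiphol.nl/public-flights/flights?scheduleDate=2019-04-05&scheduleTime=12:00"

Note that scheduleTime without scheduleDate is exactly the kind of combination that no longer works.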