LinkedIn Is Training AI on User Data Before Updating Its Terms of Service
An anonymous reader shares a report: LinkedIn is using its users’ data for improving the social network’s generative AI products, but has not yet updated its terms of service to reflect this data processing, according to posts from various LinkedIn users and a statement from the company to 404 Media. Instead, the company says it …
@quark@ferengi.one They’re all RFC3339, unless I’m mistaken: https://ijmacd.github.io/rfc3339-iso8601/ So they’re all correct.
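For what it’s worth, all three variants do parse cleanly as RFC 3339 timestamps; a minimal sketch, assuming Python 3.11+ (where datetime.fromisoformat() accepts a trailing Z):
from datetime import datetime

# The three timestamp styles seen in this thread:
stamps = [
    "2024-09-18T13:16:17Z",       # Zulu/UTC suffix (Yarn style)
    "2024-09-17T21:15:00+02:00",  # explicit local offset
    "2024-09-18T05:43:13+00:00",  # explicit zero offset
]

for s in stamps:
    dt = datetime.fromisoformat(s)  # raises ValueError if the stamp is invalid
    print(dt.isoformat(), "offset:", dt.utcoffset())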
I have noticed that twtxt timestamps differ. For example:
- @prologic@twtxt.net (and I assume any Yarn user)
2024-09-18T13:16:17Z
- @lyse@lyse.isobeef.org
2024-09-17T21:15:00+02:00
- @aelaraji@aelaraji.com (and @movq@www.uninformativ.de, and me)
2024-09-18T05:43:13+00:00
So, which is right, or best?
I came across this Gallery Theme for Hugo, and @lyse@lyse.isobeef.org immediately came to mind. I think it would be a very fitting theme to use for all your photos, Lyse!
@prologic@twtxt.net So the feed would contain two twts, right?
2024-09-18T23:08:00+10:00 Hllo World
2024-09-18T23:10:43+10:00 (edit:#229d24612a2) Hello World
Finally, @lyse@lyse.isobeef.org’s idea of recording metadata changes “inline” in the feed, at the point where the change happened (relative to the other Twts, in whatever order the file is written), is used to drive things like “Oh, this feed now has a new URI, let’s use that from now on as the feed’s identity for the purposes of computing Twt hashes”. This could extend to # nick = as a preferential indicator to clients, as well as other updates such as # description = – not just # url =. Likewise we could also support delete:229d24612a2, which would indicate to clients fetching the feed that they should delete any cached Twt matching the hash 229d24612a2, should the author wish to “unpublish” that Twt permanently, rather than just deleting the line from the feed (which does nothing for clients, really).
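To make the intent concrete, here is a rough sketch of how a client might process such a feed top to bottom. Neither syntax nor semantics are settled; a hypothetical # delete = comment form is assumed here purely for illustration:
feed_text = (
    "# url = https://example.com/twtxt.txt\n"
    "2024-09-18T23:08:00+10:00\tHello World\n"
    "# delete = 229d24612a2\n"
)

cache = {"229d24612a2": "Hllo World"}  # previously fetched Twts, keyed by hash
feed_url = None  # identity used when computing Twt hashes

for line in feed_text.splitlines():
    if line.startswith("# url ="):
        # Inline position matters: only Twts after this line are hashed
        # against the new URI.
        feed_url = line.split("=", 1)[1].strip()
    elif line.startswith("# delete ="):  # hypothetical syntax, not in any spec
        cache.pop(line.split("=", 1)[1].strip(), None)

print(feed_url)  # https://example.com/twtxt.txt
print(cache)     # {} -- the Twt has been "unpublished"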
An alternate idea for (properly) supporting Twt edits is to denote them as such and extend the meaning of a Twt Subject (which would need a better name?); for example, let’s say I produced the following Twt:
2024-09-18T23:08:00+10:00 Hllo World
And my feed’s URI is https://example.com/twtxt.txt. The hash for this Twt is therefore 229d24612a2:
$ echo -n "https://example.com/twtxt.txt\n2024-09-18T23:08:00+10:00\nHllo World" | sha1sum | head -c 11
229d24612a2
You wish to correct your mistake, so you make an amendment to that Twt like so:
2024-09-18T23:10:43+10:00 (edit:#229d24612a2) Hello World
Which would then have a new Twt hash value of 026d77e03fa:
$ echo -n "https://example.com/twtxt.txt\n2024-09-18T23:10:43+10:00\nHello World" | sha1sum | head -c 11
026d77e03fa
Clients would then take this edit:#229d24612a2 to mean: this Twt is an edit of 229d24612a2 and should replace it in the client’s cache, or be indicated to the user as the intended content.
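A minimal client-side sketch of that rule, assuming the hash is the first 11 hex characters of SHA-1 over the feed URI, timestamp, and content joined by real newlines (note that bash’s echo -n does not expand \n, so the shell examples above may hash literal backslashes instead), and that the (edit:#…) subject is stripped before hashing, as the example hashes imply:
import hashlib
import re

EDIT_SUBJECT = re.compile(r"^\(edit:#([0-9a-f]+)\)\s*")

def twt_hash(feed_uri, timestamp, content):
    # First 11 hex characters of SHA-1 over "uri\ntimestamp\ncontent".
    payload = f"{feed_uri}\n{timestamp}\n{content}".encode("utf-8")
    return hashlib.sha1(payload).hexdigest()[:11]

def apply_twt(cache, feed_uri, timestamp, content):
    m = EDIT_SUBJECT.match(content)
    if m:
        cache.pop(m.group(1), None)   # drop the superseded Twt
        content = content[m.end():]   # hash the corrected content only
    cache[twt_hash(feed_uri, timestamp, content)] = (timestamp, content)

uri, cache = "https://example.com/twtxt.txt", {}
apply_twt(cache, uri, "2024-09-18T23:08:00+10:00", "Hllo World")
old = twt_hash(uri, "2024-09-18T23:08:00+10:00", "Hllo World")
apply_twt(cache, uri, "2024-09-18T23:10:43+10:00", f"(edit:#{old}) Hello World")
print(cache)  # only the corrected "Hello World" Twt remains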
@bender@twtxt.net Just replace the echo with something like pbpaste or similar. You’d just need to shell-escape things like " and such. That’s all. Alternatively, you can shove the 3 lines into a small file and cat file.txt | ...
With a SHA-1 hash truncated to its first N hex characters (4 bits each, so N = 11/12/13 gives 44/48/52 bits), the probability of a hash collision at various k (number of Twts) becomes:
>>> import math
>>>
>>> def collision_probability(k, bits):
... n = 2 ** bits # Total unique hash values based on the number of bits
... probability = 1 - math.exp(- (k ** 2) / (2 * n))
... return probability * 100 # Return as percentage
...
>>> # Example usage:
>>> k_values = [100000, 1000000, 10000000]
>>> bits = 44 # Number of bits for the hash
>>>
>>> for k in k_values:
... print(f"Probability of collision for {k} hashes with {bits} bits: {collision_probability(k, bits):.4f}%")
...
Probability of collision for 100000 hashes with 44 bits: 0.0284%
Probability of collision for 1000000 hashes with 44 bits: 2.8022%
Probability of collision for 10000000 hashes with 44 bits: 94.1701%
>>> bits = 48
>>> for k in k_values:
... print(f"Probability of collision for {k} hashes with {bits} bits: {collision_probability(k, bits):.4f}%")
...
Probability of collision for 100000 hashes with 48 bits: 0.0018%
Probability of collision for 1000000 hashes with 48 bits: 0.1775%
Probability of collision for 10000000 hashes with 48 bits: 16.2753%
>>> bits = 52
>>> for k in k_values:
... print(f"Probability of collision for {k} hashes with {bits} bits: {collision_probability(k, bits):.4f}%")
...
Probability of collision for 100000 hashes with 52 bits: 0.0001%
Probability of collision for 1000000 hashes with 52 bits: 0.0111%
Probability of collision for 10000000 hashes with 52 bits: 1.1041%
>>>
If we adopted this scheme, we would have to increase the number of characters (first N) from 11 to 12 and finally 13 as the total number of Twts across the space grows. I think the last full crawl/scrape found around ~500k (maybe)? https://search.twtxt.net/ says only ~99k.
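Flipping the question around: given an expected number of Twts, how many hex characters keep the birthday-bound collision probability under some threshold? A small sketch (the 1% threshold is an arbitrary choice):
import math

def min_hex_chars(k, max_p=0.01):
    # Smallest number of hex characters (4 bits each) keeping the
    # birthday-bound collision probability below max_p for k Twts.
    chars = 1
    while 1 - math.exp(-(k ** 2) / (2 * 2 ** (4 * chars))) > max_p:
        chars += 1
    return chars

for k in (100_000, 500_000, 1_000_000, 10_000_000):
    print(f"{k:>10,} twts -> first {min_hex_chars(k)} hex chars")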
@prologic@twtxt.net How would that line look if the twtxt itself had " and other “spurious” characters in it?
I just ran dwarf fortress on #openbsd O:)
@quark@ferengi.one My money is on a SHA1SUM hash encoding to keep things much simpler:
$ echo -n "https://twtxt.net/user/prologic/twtxt.txt\n2020-07-18T12:39:52Z\nHello World! 😊" | sha1sum | head -c 11
87fd9b0ae4e
I think it was a mistake to take the last n Base32-encoded characters of the Blake2b 256-bit hash value. It should have been the first n, where n >= 7.
Taking the last n characters of a Base32-encoded hash instead of the first n can be problematic for several reasons:
- Hash structure: hashes are typically designed so that their outputs have specific statistical properties. The first few characters often have more entropy or variability, meaning they are less likely to show patterns; the last characters may not maintain this randomness, especially if the encoding method tends to produce less varied endings.
- Collision resistance: when using hashes, the goal is to minimize the risk of collisions (different inputs producing the same output). By using the first few characters, you leverage the full distribution of the hash; the last characters may not distribute in the same way, potentially increasing the likelihood of collisions.
- Encoding characteristics: Base32 encoding has a specific structure and padding that can constrain the last characters more than the first. If the data being hashed is similar, the last characters may be more similar across different hashes.
- Use cases: in many applications (like generating unique identifiers), the beginning of the hash is often the most informative and varied. Relying on the end might reduce the uniqueness of generated identifiers, especially if a prefix has a specific context or meaning.
In summary, using the first n characters generally preserves the intended randomness and collision resistance of the hash, making it a safer choice in most cases.
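The encoding point, at least, can be made concrete: 256 is not a multiple of 5, so the final Base32 character of a 32-byte digest encodes only a single significant bit and is always a or q; note that every example hash in this thread (o6dsrga, kp4zitq, wsdbfna) ends in one of those two letters. A 7-character suffix therefore carries roughly 31 bits of entropy rather than 35. A quick demonstration:
import base64
import hashlib
import os

# 256 bits = 51 full Base32 characters + 1 leftover bit, so after
# stripping "=" padding the 52nd character can only be "a" or "q".
seen = set()
for _ in range(1000):
    digest = hashlib.blake2b(os.urandom(16), digest_size=32).digest()
    encoded = base64.b32encode(digest).decode().rstrip("=").lower()
    seen.add(encoded[-1])

print(seen)  # {'a', 'q'}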
@quark@ferengi.one Bloody good question 🙋 God only knows 🤣
@prologic@twtxt.net this works perfectly. Thanks!
@movq@www.uninformativ.de Haha 😝
What I was referring to in the OP: Sometimes I check the workphone simply out of curiosity. 😂
@movq@www.uninformativ.de Fair 👌
Current Twt Hash spec and probability of hash collision:
The probability of a Twt Hash collision depends on the size of the hash and the number of possible values it can take. For the Twt Hash, which uses a Blake2b 256-bit hash, Base32 encoding, and takes the last 7 characters, the space of possible hash values is significantly reduced.
Breakdown:
- Base32 encoding: each character represents 5 bits of information (since \(2^5 = 32\)).
- 7 characters: the total number of possible hashes is \(32^7 = 2^{35} = 34{,}359{,}738{,}368\), i.e. about 34.4 billion possible hash values.
The probability of a hash collision depends on the number of hashes generated and can be estimated using the Birthday Paradox, which tells us that collisions become likely far sooner than intuition suggests. The approximate probability of at least one collision after generating \(n\) hashes is:
\[
P(\text{collision}) \approx 1 - e^{-\frac{n^2}{2M}}
\]
Where:
- \(n\) is the number of generated Twt Hashes.
- \(M = 32^7 = 34{,}359{,}738{,}368\) is the total number of possible hash values.
For practical purposes, here are some example probabilities for different numbers of hashes \(n\):
- For 1,000 hashes: \(P(\text{collision}) \approx 0.0000146\) (0.0015%)
- For 10,000 hashes: \(P(\text{collision}) \approx 0.00145\) (0.15%)
- For 100,000 hashes: \(P(\text{collision}) \approx 0.135\) (13.5%)
- For small to moderate numbers of hashes (up to around 10,000), the collision probability is quite low.
- However, as the number of Twts grows towards and beyond 100,000, the likelihood of a collision increases significantly due to the relatively small hash space (about 34 billion values).
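These figures are easy to re-derive; a quick sanity check of the formula above (and note that the trailing-character quirk discussed earlier shrinks the effective space further, to roughly 2^31):
import math

M = 32 ** 7  # 34,359,738,368 possible 7-character Base32 suffixes
for n in (1_000, 10_000, 100_000, 1_000_000):
    p = 1 - math.exp(-n ** 2 / (2 * M))
    print(f"{n:>9,} twts: {p:.4%}")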
@quark@ferengi.one Add here:
* a0826a65 - Add debug sub-command to yarnc (7 weeks ago) <James Mills>
I’d recommend a git pull && make build
$ echo -n "https://twtxt.net/user/prologic/twtxt.txt\n2020-07-18T12:39:52Z\nHello World! 😊" | sha1sum | head -c 11
87fd9b0ae4e
@prologic@twtxt.net I don’t get paid for “standing by” and “waiting for a call”, that’s right. But I’m fine with that, because I don’t have to be available, either. 😅 If someone were to call me (or send me a text message), I wouldn’t be obliged to help them out. If I have the time and energy, I will do it, though. And that extra time will be paid.
It works for us because there are enough people around and there’s a good chance that someone will be able to help.
Really, I am glad that we have this model. The alternative would be actual on-call duty, like, this week you’re the poor bastard who is legally required to fix shit. That’s just horrible, I don’t want that. 😅
What I was referring to in the OP: Sometimes I check the workphone simply out of curiosity. 😂
$ echo -n "https://twtxt.net/user/prologic/twtxt.txt\n2020-07-18T12:39:52Z\nHello World! 😊" | sha256sum | base32 | tr -d '=' | tr 'A-Z' 'a-z' | tail -c 12
tdqmjaeawqu
@prologic@twtxt.net text/plain without an explicit charset is still just US-ASCII:
The default character set, which must be assumed in the absence of a charset parameter, is US-ASCII.
https://www.rfc-editor.org/rfc/rfc2046.html#section-4.1.2
https://www.rfc-editor.org/rfc/rfc6657#section-4
Just experimenting…
$ echo -n "https://twtxt.net/user/prologic/twtxt.txt\n2020-07-18T12:39:52Z\nHello World! 😊" | sha256sum | base64 | tr -d '=' | tail -c 12
NWY4MSAgLQo
It would appear that the blake2b 256-bit digest algorithm is no longer supported by the openssl tool, however blake2s256 is; I’m not sure why 🤔
$ echo -n "https://twtxt.net/user/prologic/twtxt.txt\n2020-07-18T12:39:52Z\nHello World! 😊" | openssl dgst -blake2s256 -binary | base32 | tr -d '=' | tr 'A-Z' 'a-z' | tail -c 7
zq4fgq
Obviously this produces the wrong hash, which should be o6dsrga as indicated by the yarnc hash utility:
$ yarnc hash -u https://twtxt.net/user/prologic/twtxt.txt -t 2020-07-18T12:39:52Z "Hello World! 😊"
o6dsrga
But at least the shell pipeline is “correct”.
FWIW the standard UNIX tools for Blake2b are openssl and b2sum – just trying to figure out how to make a shell pipeline again (if you really want that); the tools keep changing, god damnit 🤣
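If the CLI tools keep drifting, a few lines of Python may be the more stable route. A minimal sketch of the hash as described in the spec above (Blake2b 256-bit digest, Base32 without padding, lowercased, last 7 characters kept); it should print o6dsrga, matching yarnc:
import base64
import hashlib

def twt_hash(feed_url, timestamp, content):
    # Blake2b 256-bit over "url\ntimestamp\ncontent", Base32 encoded,
    # padding stripped, lowercased, last 7 characters kept.
    payload = f"{feed_url}\n{timestamp}\n{content}".encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    return base64.b32encode(digest).decode().rstrip("=").lower()[-7:]

print(twt_hash("https://twtxt.net/user/prologic/twtxt.txt",
               "2020-07-18T12:39:52Z", "Hello World! 😊"))  # o6dsrga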
@quark@ferengi.one Do you mean something like this?
$ ./yarnc debug ~/Public/twtxt.txt | tail -n 1
kp4zitq 2024-09-08T02:08:45Z (#wsdbfna) @<aelaraji https://aelaraji.com/twtxt.txt> My work has this thing called "compressed work", where you can **buy** extra time off (_as much as 4 additional weeks_) per year. It comes out of your pay though, so it's not exactly a 4-day work week but it could be useful, just haven't tired it yet as I'm not entirely sure how it'll affect my net pay
@quark@ferengi.one Watching TV at the moment, so unsure about “out of the box” tools, but there are 3 implementations on the spec page: https://dev.twtxt.net/doc/twthashextension.html
Could someone knowledgeable reply with the steps a grandpa would take to calculate the hash of a twtxt from the CLI, using out-of-the-box tools? I swear I read about it somewhere, but can’t find it.
@aelaraji@aelaraji.com Sometimes that’s the most valuable though! Feedback!
@prologic@twtxt.net I’m glad to! It just kinda feels a bit off when it’s all I can do 😅
@quark@ferengi.one Mine is a little overkill 😂 but I need to do something for practice:
#!/bin/bash
set -e
trap 'echo "!! Something went wrong...!!"' ERR
#============= Variables ==========#
# Source files
LOCAL_DIR=$HOME/twtxt
TWTXT=$LOCAL_DIR/twtxt.txt
HTML=$LOCAL_DIR/log.html
TEMPLATE=$LOCAL_DIR/template.tmpl
# Destination
REMOTE_HOST=remoteHostName # Host already set up in ~/.ssh/config
WEB_DIR="path/to/html/content"
GOPHER_DIR="path/to/phlog/content"
GEMINI_DIR="path/to/gemini-capsule/content"
DIST_DIRS=("$WEB_DIR" "$GOPHER_DIR" "$GEMINI_DIR")
#============ Functions ===========#
# Building log.html:
build_page() {
twtxt2html -T "$TEMPLATE" "$TWTXT" > "$HTML"
}
# Bulk Copy files to their destinations:
copy_files() {
for DIR in "${DIST_DIRS[@]}"; do
# Copy both `txt` and `html` files to the Web server and only `txt`
# to gemini and gopher server content folders
if [ "$DIR" == "$WEB_DIR" ]; then
scp -C "$TWTXT" "$HTML" "$REMOTE_HOST:$DIR/"
else
scp -C "$TWTXT" "$REMOTE_HOST:$DIR/"
fi
done
}
#========== Call to functions ===========#
build_page && copy_files
I’ve been using Codeium too for the last week or so! It’s pretty good, and like @xuu@txt.sour.is said, it’s a pretty decent junior assistant. It helps me write good docs and the tab completion is amazing!
It of course completely sucks at doing anything “intelligent” or complex, but if you just use it as a fancier autocomplete it’s actually halfway decent 👌
@quark@ferengi.one I admit I find the general “click here to share blah” wasteful, useless and unengaging, really. Not just Wordle.
Admittedly, however, I’ve been guilty of doing this myself sometimes. 🤦♂️ Sometimes, though, I think it’s OK to show your achievements. 👌
This might be quite unpopular, but I truly dislike Wordle. The reason isn’t rooted in any psychological issue, it is much, much simpler: people share their Wordle result(s)—I figure they feel good about themselves—and for me it is only uneven, unaligned, wasteful noise. I don’t even want to show you an example, but I am sure you know what I am talking about.
Thank gods those posting their hideous squares have finally quieted down. LOL.
@quark@ferengi.one LOL 😂 No worries! 😉
@quark@ferengi.one All fixed! 🥳 pleasure doing business with y’all. 🤣
Thank you for adding the feature so fast, @prologic@twtxt.net! Look at how beautiful this one renders now. Oh my!
@lyse@lyse.isobeef.org Just as an aside, shouldn’t you assume utf-8 anyway these days if not specified? 🤔 I mean, basically everything almost always uses utf-8 encoding, right? 😅
@quark@ferengi.one Okay fair 👌
@movq@www.uninformativ.de So basically on-call, but you don’t get paid for it? 🤔
@quark@ferengi.one aye aye captain 🤣
main.go (but it can be done on a template now, so no reason to touch the code):
@quark@ferengi.one It’s a good thing yarnd makes this user-configurable via a preference 🤣 And displays both 😅
@quark@ferengi.one LOL 😂 Thanks!
yarnd (at least) doesn’t support creating such a custom TwtSubject, but it will reply to, respect, and thread one if one was constructed.
@quark@ferengi.one You are right: whilst it technically works, it’s not well supported. Too much of the code would have to change to support that, and it’s not worth the value.
So yeah, no: whilst it technically works, neither jenny nor yarnd supports it very well, only at a very basic level.