Darnell (Seize The Day)'s latest activity
3mo · 3m read · Public · darnell.day
Telegram CEO's Arrest Should Be A Wake-Up Call For Social Media Admins & CEOs
Hours ago, France 🇫🇷 arrested the CEO of Telegram, Pavel Durov (@durov@telegram.me), for failing to provide adequate moderation on his social messaging platform (props to @chris@threads.net for alerting me about this story).
French authorities have detained Pavel Durov, the French-Russian billionaire who founded the messaging app Telegram, at an airport outside Paris, according to CNN affiliate BFMTV. [...]
Durov, 39, was wanted under a French arrest warrant due to the lack of moderation on Telegram which led to it being used for money laundering, drug trafficking and sharing pedophilic content, according to BFMTV.
According to BFMTV, the Telegram founder had not regularly travelled to France and Europe since the arrest warrant was issued. (Cable News Network)
The move has rattled corporate leaders of other large platforms, including Elon Musk (@elonmusk@x.com), who has posted several times about the incident, often portraying Europe as a region lacking in freedom of speech.
Musk's snark aside, Pavel Durov's arrest sets a precedent for social media administrators and CEOs (Chief Executive Officers): the leaders and administrators of each respective social media network can now be held accountable for failing to properly moderate their networks.
For example, Elon Musk is already facing legal challenges in Australia 🇦🇺, Brazil 🇧🇷, & the European Union 🇪🇺 (which comprises 27 countries, plus an additional four in the Schengen Area) for moderation failures, & it would not be surprising if arrest warrants were issued.
Note: The United States 🇺🇸 would probably intervene if any international arrest warrants were issued, mainly because Elon Musk is already embroiled in numerous court cases within America 🇺🇸 (😂🤣😂), & the US government would prefer the opportunity of dealing out justice itself upon an American 🇺🇸 citizen who is violating American 🇺🇸 laws.
So Why Should Social Media Admins In The Fediverse Be Worried‽
Although most Fediverse admins are probably too small to be noticed by most governments, they could still be subject to raids by authorities over offensive content, which happened to a reasonably active Mastodon instance last year.
In May, Mastodon server Kolektiva.social was compromised when one of the server's admins had their home raided by the FBI for unrelated charges. All of their electronics, including a backup of the instance database, were seized. [...]
According to Kolektiva, the seized database, now in the FBI's possession, includes personal information such as email addresses, hashed passwords, and IP addresses from three days prior to the date the backup was made. It also includes posts, direct messages, and interactions involving a user on the server. Because of the nature of the fediverse, this also implicates user messages and posts from other instances.
To make matters worse, it appears that the admin targeted in the raid was in the middle of maintenance work which left would-be-encrypted material on the server available in unencrypted form at the time of seizure. (Via @eff@mastodon.social on Electronic Frontier Foundation Blog)
Like most federated instances, Kolektiva probably relied on users to report offensive content to moderators. However, some content ended up slipping through, and the instance was swept up in a surprise raid by the Federal Bureau of Investigation (FBI).
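One practical lesson from the Kolektiva incident is that instance backups should be encrypted at rest, so that seized hardware does not hand over user data in readable form. As a minimal sketch (not Kolektiva's or Mastodon's actual tooling), here is how a backup file could be encrypted in Python using the cryptography package's Fernet recipe; the file names are hypothetical:

```python
# Minimal sketch: encrypt a database backup at rest using Fernet
# (symmetric authenticated encryption from the `cryptography` package).
# File names are hypothetical; adapt to your own backup job.
from cryptography.fernet import Fernet

def encrypt_backup(plain_path: str, enc_path: str, key: bytes) -> None:
    """Read a plaintext backup file and write an encrypted copy."""
    with open(plain_path, "rb") as src:
        data = src.read()
    with open(enc_path, "wb") as dst:
        dst.write(Fernet(key).encrypt(data))

if __name__ == "__main__":
    # Generate the key once and store it OFF the server (e.g., in a
    # password manager); a key sitting next to the backup defeats the point.
    key = Fernet.generate_key()
    print("Store this key safely:", key.decode())
    encrypt_backup("instance_backup.sql", "instance_backup.sql.enc", key)
```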
Automated Solutions For Fediverse Admins
Fediverse admins managing ActivityPub-powered sites with more than 1,000 monthly active users (MAU) &/or 100,000 total users should consider integrating automated tools that can detect objectionable content like Child Sexual Abuse Material (CSAM) & financial fraud.
A few automated CSAM-detection tools available to admins are Safer by Thorn (more info here) & a CSAM Scanning Tool by @cloudflare@cloudflare.social, the latter of which is freely available to Cloudflare customers.
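Tools in this category generally work by hashing uploaded media & comparing those hashes against a vetted database of known material, with perceptual (fuzzy) hashing tolerating small alterations. The sketch below is purely illustrative, built on the open-source imagehash package; the hash list, threshold, & file name are hypothetical, & real hash databases are only shared with vetted organizations:

```python
# Illustrative sketch of perceptual-hash matching (NOT a real CSAM scanner;
# actual hash databases are restricted to vetted partners).
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

# Hypothetical list of known-bad perceptual hashes (hex strings).
KNOWN_BAD_HASHES = [imagehash.hex_to_hash("0f1e2d3c4b5a6978")]

MAX_DISTANCE = 5  # Hamming-distance threshold; tune for your false-positive budget

def is_flagged(path: str) -> bool:
    """Return True if the image's perceptual hash is near a known-bad hash."""
    h = imagehash.phash(Image.open(path))
    # ImageHash objects support subtraction, yielding the Hamming distance.
    return any(h - bad <= MAX_DISTANCE for bad in KNOWN_BAD_HASHES)

if __name__ == "__main__":
    print(is_flagged("upload.jpg"))  # hypothetical uploaded file
```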
Automatically detecting fraud is more complex & probably more expensive. However, several companies offer services designed to detect fraud automatically at scale, which can assist admins in moderating their online content. Mateusz Pniewski has an excellent list of 13 companies that admins may want to check out, which could make it easier to detect fraud on federated platforms.
What About A Decentralized, Less Expensive Solution‽
Unless one is a Cloudflare client, implementing automated tools like these is prohibitively expensive for most admins. However, there could be a way to extend automated protection to the Fediverse without contracting assistance from a corporate giant.
Even as the digital world has transformed over the years, with social media platforms like Pixelfed becoming hubs for visual sharing and communication, the challenge of spam remains as relevant as ever. Pixelfed's ingenious implementation of the Naive Bayes classifier to combat spam is a testament to the algorithm's versatility. By analyzing the captions accompanying images, Pixelfed's spam filter can swiftly determine whether a post contains genuine content or is simply trying to clutter your feed with unwanted promotions or irrelevant information. (Pixelfed Blog)
@dansup@mastodon.social has built anti-spam features into Pixelfed instances, which theoretically could be tweaked to thwart more offensive content on each instance without sacrificing privacy.
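For the curious, the kind of Naive Bayes caption classifier described above can be sketched in a few lines of Python with scikit-learn (a generic illustration with toy data, not Pixelfed's actual implementation, which lives in its PHP codebase):

```python
# Generic Naive Bayes spam-filter sketch (illustration only; Pixelfed's
# real filter is part of its PHP codebase). Requires: pip install scikit-learn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set of image captions labeled spam/ham.
captions = [
    "check out my sunset photo from the hike",
    "BUY FOLLOWERS NOW cheap promo click link",
    "my cat sleeping in a cardboard box",
    "FREE crypto giveaway click here to win",
]
labels = ["ham", "spam", "ham", "spam"]

# Bag-of-words features feeding a multinomial Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(captions, labels)

print(model.predict(["limited promo click this link for free followers"]))
# -> ['spam'] (on this toy data; a real filter needs far more training text)
```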
This would probably require a massive amount of time, energy, & collaboration among Fediverse founders (as well as funds). However, if successful, it would help safeguard the Fediverse from unsavory actors attempting to ruin the internet for everyone.
👨🏾‍💻 by @darnell@darnellclayton.com · @darnell@darnell.day · 📧 darnell@darnell.day
Follow my adventures upon:
Darnell (One)
🎦 Darnell (Movies, Opuses, Entertainment)
📸 Darnell (Out Of Office)
🦹🏾‍♂️ WordPress Workarounds:
Darnell (TeleVerse)
🌍 Darnell (Africa)
👨🏾‍🎨 Darnell (Creative Outlet)
🥷🏾 Other Hideaways:
🧵 Darnell (Threads)
🗞 Darnell (Hard News)
Darnell (Flipboard)
6mo · 1m read · Public · darnell.day
Threads Muting Pixelfed And Other Fediverse Instances Over One Simple Rule
I remember @dansup@mastodon.social mentioning this last month (original post was deleted), but Threads by Instagram continues to mute the entire Pixelfed community as well as other Fediverse instances (also known as servers) over one simple rule.
Server guidelines
A server may be added to our server blocklist if it doesn't comply with our guidelines for communicating with Threads. [...]
We'll also block a server if it doesn't have a:
- Sufficient privacy policy; or
- Publicly available policy to restrict access to users under the age of 13; or
- Publicly accessible feed (Emphasis mine, via Instagram Help Center)
For those unaware, a publicly accessible feed is merely a stream on a local instance where anyone can view all of the public chatter on the server. Mastodon sums it up best with this description:
To allow you to discover potentially interesting content, Mastodon provides a way to browse all public posts. Well, there is no global shared state between all servers, so there is no way to browse all public posts. When you browse the federated timeline, you see all public posts that the server you are on knows about. There are various ways your server may discover posts, but the bulk of them will be from people that other users on your server follow.
There is a way to filter the federated timeline to view only public posts created on your server: The local timeline. Mind that "local" here refers to the server, not to a geographical location. (Mastodon Documentation)
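Concretely, a publicly accessible feed on a Mastodon-compatible server can be read without logging in through the standard public timeline API. Here is a small Python sketch; mastodon.social is used purely as an example host:

```python
# Fetch a server's public local timeline without authentication, which is
# roughly the "publicly accessible feed" Threads' guidelines refer to.
# Requires: pip install requests
import requests

INSTANCE = "https://mastodon.social"  # example host

# GET /api/v1/timelines/public is part of the standard Mastodon API;
# local=true restricts it to posts created on this server.
resp = requests.get(
    f"{INSTANCE}/api/v1/timelines/public",
    params={"local": "true", "limit": 5},
    timeout=10,
)
resp.raise_for_status()

for status in resp.json():
    print(status["account"]["acct"], "-", status["url"])
```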
Pixelfed has a local timeline, but it is not publicly accessible to Threads, which explains why the site is muting the entire Pixelfed ecosystem, along with any other Fediverse instances that decline to give Threads access to their public timelines.
The Pixelfed creator Dan may resolve this issue by letting instance admins enable a publicly accessible feed so their respective users can communicate with the greater Threads community.
However, in hindsight, it might be wise to leave it off by default, especially with numerous tech companies scouring the web to train the synthetic brains of artificial intelligence programs (Meta included, props to @r_alb@mastodon.social for alerting me about this).
👨🏾‍💻 by @darnell@darnellclayton.com · @darnell@darnell.day
Follow my adventures upon:
Darnell (One)
🎦 Darnell (Movies, Opuses, Entertainment)
📸 Darnell (Out Of Office)
🦹🏾‍♂️ WordPress Workarounds:
Darnell (TeleVerse)
🌍 Darnell (Africa)
👨🏾‍🎨 Darnell (Creative Outlet)
🥷🏾 Other Hideaways:
🧵 Darnell (Threads)
🗞 Darnell (Hard News)
Darnell (Flipboard)