Why does ScubaBoard need Google Trackers?

What gives? Why does ScubaBoard need to use a Google tracker?
It is hard to participate in a global forum while maintaining true anonymity. I also have some interest in this question.

In the past incarnation of this fine board, I could easily see where the indexing robots were coming from and how many were currently raiding the site for archival. Over time I've come to see that as a good thing. Indexing isn't the same thing as tracking, but it's related. Can a net jockey pick these two issues apart?

How do I check the current stats now???

I know, I know. The results of the uncountable web crawlers do not require my observation. But I still like to watch...
 
You have a choice
True.

I could stop using this site and cancel my red membership because the site is borderline unusable.

Of course ScubaBoard has a choice too. They could stop letting Google track their members, so the site doesn't break for the users who browse with tracker-blocking browsers.

I know I'm not the only one having this issue; I'm just one of the few who have posted about it. How much traffic will the site keep losing because it breaks when you don't let Google track you?
 
I don't like posting off-topic in another member's thread.

Can someone in the know detail the differences between trackers and indexers or split me off into another thread?

Edit: Best I can do, quoting from techunwrapped.com:

"There is no doubt that web crawlers [Indexers] are of great value to those responsible for web pages. At the end of the day, when someone decides to create a website, they will have the goal of receiving visits, having an audience and reaching as many users as possible.

Thanks to these trackers, that web page will be available to users who reach it through search engines. Otherwise it would be like having a store in a basement without a door and without a sign, and expecting customers to arrive."

Edit, Edit:

From Google Support:


"What is web crawling and indexing?

Crawling is a process which is done by search engine bots to discover publicly available web pages. Indexing means when search engine bots crawl the web pages and saves a copy of all information on index servers and search engines show the relevant results on search engine when a user performs a search query. Jul 30, 2020"
 
Crawlers aka indexers aka robots: google (et al.) don't go out looking for your search term(s) on the Internet; they have a pre-built lookup database that lives somewhere on google's servers. Crawlers crawl over the entire 'Net in the background and build/update said database.
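
If you want to picture it, a crawler really is just a loop: fetch a page, record its words against its URL, follow the links, repeat. Here's a toy sketch in Python (the start URL and limits are made up; real crawlers also honor robots.txt, rate-limit themselves, dedupe, and so on):

Code:
import re
import urllib.request
from collections import defaultdict

def crawl(start_url, max_pages=10):
    """Toy crawler/indexer: build a word -> set-of-URLs lookup table."""
    index = defaultdict(set)
    to_visit, seen = [start_url], set()

    while to_visit and len(seen) < max_pages:
        url = to_visit.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue
        # "Index" the page: remember every word so a search can find this URL later.
        for word in re.findall(r"[a-z]{3,}", html.lower()):
            index[word].add(url)
        # Follow the links found on the page.
        to_visit += re.findall(r'href="(https?://[^"]+)"', html)
    return index

# A search engine then just looks your term up in the pre-built index:
# crawl("https://www.scubaboard.com/")["wetsuit"]  -> set of URLs mentioning it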

Then there are "cookies", stored in the browser (i.e. on your computer), that keep some small amount of information. It could be e.g. an authentication token: if you "login with Facebook", it's the token issued by FB that the browser passes on to SB to let the latter know that the user's FB login is legit.

A cookie could also store the URL of the last page you viewed before coming "here". These let whichever server can access them (there's devil in the details here too) track your path through the website -- or across websites. Hence "trackers".
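
To make that concrete, here's roughly what the cookie dance looks like, sketched with Python's standard http.server (the cookie name and the in-memory log are invented for illustration): the server hands the browser an ID on the first visit, the browser sends it back with every later request, and the server quietly records the path.

Code:
import uuid
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer

VISITS = {}  # visitor_id -> list of pages they viewed

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        cookie = SimpleCookie(self.headers.get("Cookie", ""))
        if "visitor_id" in cookie:
            visitor_id = cookie["visitor_id"].value      # returning visitor
        else:
            visitor_id = uuid.uuid4().hex                # first visit: issue an ID

        VISITS.setdefault(visitor_id, []).append(self.path)  # "track" the path

        self.send_response(200)
        self.send_header("Set-Cookie", f"visitor_id={visitor_id}")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(f"You have viewed: {VISITS[visitor_id]}\n".encode())

# HTTPServer(("localhost", 8000), Handler).serve_forever()

Block or delete that cookie and the server just sees a brand-new anonymous visitor every time -- which is exactly what the tracker-blockers exploit.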

I actually allow all google analytics cookies (using EFF's Privacy Badger) because google analytics is a useful tool for us "IT" people. It tells you how many visitors you have (capacity planning), which pages are most popular (where to put more ads if you depend on ad revenue), etc. The path through the site is useful for optimizing layout, providing shortcuts and so on.

Then there are others, Facebook being the Worst Guy du jour, that want to know where else you went and what you saw there, so they can use "AI" to feed you more of the same, reinforcing your biases and all that bad stuff.

Now that GDPR is out and cookies are legally questionable, they're back to the "pixels" idea: you put a single-pixel image on every page of your site, where nobody will notice. The image is served from a tracking server that gets all the tracking info it wants when your browser fetches the image. No cookies involved, no tracker-blockers interfering. (But if you're one of the "IT" people, you can set up a Pi-hole for that. Ads too.)
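
The "pixel" is nothing magic: it's an ordinary image request aimed at the tracking server. Here's a rough sketch in Python of what such an endpoint could look like (hypothetical host name; real trackers also stuff extra parameters into the URL): serve a 1x1 transparent GIF and log whatever the browser volunteers along with the request.

Code:
from http.server import BaseHTTPRequestHandler, HTTPServer

# A 1x1 transparent GIF, 43 bytes.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00!"
         b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
         b"\x00\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The page that embedded <img src="https://tracker.example/pixel.gif">
        # shows up here in the Referer header -- that's the whole trick.
        print("hit from", self.client_address[0],
              "on page", self.headers.get("Referer"),
              "using", self.headers.get("User-Agent"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

# HTTPServer(("0.0.0.0", 8080), PixelHandler).serve_forever()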
 
PS. There is another "tracker" (and it's part of how "pixels" work): when your browser hits a URL, it sends the so-called referrer header as part of the request. That contains the address of the webpage that "referred" it to that URL. In Firefox this header can be turned off entirely via about:config.

This can be used to track your path through the site, too, but cookies are easier for a couple of reasons.
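
For the curious, here's what that looks like from the sending side, sketched in Python (made-up URLs; the header is literally spelled "Referer" in HTTP, a historical typo). Normally the browser adds it for you whenever you follow a link or load an embedded image; here it's set by hand just to show what the receiving server gets to see:

Code:
import urllib.request

req = urllib.request.Request(
    "https://tracker.example/pixel.gif",   # hypothetical tracking URL
    headers={"Referer": "https://www.scubaboard.com/community/threads/some-thread/"},
)
# The server on the other end reads self.headers.get("Referer") (see the pixel
# sketch above) and now knows exactly which page you were on when the request fired.
# urllib.request.urlopen(req)   # uncomment to actually send it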
 
OK, low-res is about 75 ppi and hi-res starts around 300 ppi. So a single pixel can be used for tracking???

If it's an image, it can be served from an entirely different server, just like when you embed an image in your post from an external link. The only difference is that nobody will ever know it's there (make it transparent for good measure), unless of course they "view source" and stare at the matrix directly. See my other post for the "referrer header".
 
