
There are also specific tools for querying and interacting with the Wayback Machine repository. Note that such requests trigger an archive by the Internet Archive from one of its archiving nodes; you're not sending the page to the Archive yourself. Image and video content may not be preserved at full resolution, and full post comments may not be archived.

For Linux, MacOS / OSX, BSD, and other Unix-like operating systems (including Android with Termux, or Windows with a Unix/Linux environment), the following script (I've saved this as archive-url) will archive the requested URL. Save that to your execution path (I've chosen ~/bin; you might use /usr/local/bin or another location on your $PATH) and invoke it as, say (again referring to the G+MM homepage). If you have a list of URLs in a file (or pipelined from command output), you can request all of them to be archived in a simple bash loop.

The Internet Archive is fueled by donations, which provide servers, disk, and bandwidth to receive and share content. The Archive Team runs projects to save bits of Web history that appear likely to be lost. From past experience, the Archive Team can suck in amazing amounts of data quickly, and general success is likely. As of the April 2 shutdown, roughly 9 am US/Pacific time, the Archive Team's pull was 98.5% complete (by profiles listed), and 90%+ of G+ Communities were also saved (allowing for paginated perusal of these). The tracker shows the status only of the current batch.

Warrior connection notes: No Tor. Proxies can return bad data. We prefer connections from many public IP addresses if possible. Current Warrior images are available online: https://www.archiveteam.org/index.php?title=Warrior.

I'm trying to search an account name, but it doesn't show up on the Wayback Machine at all. Does that mean it doesn't exist?
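The archive-url script itself is elided in this copy of the post. What follows is a minimal sketch of such a helper, not the author's original script; it assumes the Internet Archive's standard Save Page Now URL form, https://web.archive.org/save/<URL>, and prints the request rather than sending it (the live curl call is left commented out):

```shell
#!/bin/sh
# archive-url: hedged sketch of a Save Page Now helper (not the original
# script, which is elided upstream). Assumes the standard endpoint
# https://web.archive.org/save/<URL>.
archive_url() {
  save="https://web.archive.org/save/$1"
  # Print the request URL; uncomment the curl line to actually submit it.
  echo "$save"
  # curl -s -o /dev/null "$save"
}

archive_url "https://plus.google.com/communities/112164273001338979772"
# → https://web.archive.org/save/https://plus.google.com/communities/112164273001338979772
```

Saved as archive-url on your $PATH (with the curl line enabled), this gives the one-URL-per-invocation behaviour the post describes.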
https://plus.google.com/104092656004159577193 (that's my old G+ UUID / profile page).

If you don't understand part or any of this or the referenced documentation, and cannot get up and running by yourself, we'll manage without you. Self-supporting volunteers are appreciated. If you believe your country implements censorship, do not run a Warrior. The Archive Team works closely with, but is not affiliated with, the Internet Archive. For the most part, contributing to the Internet Archive is strongly encouraged, as they do the heavy lifting, but the Archive Team has its own smaller contributions project: people with access to large-scale storage and high-bandwidth network connections are especially appreciated.

Private posts, and any previously deleted content, will not be saved. I've frequently seen my pages turning up in Japanese, for instance.

These are a bit more cumbersome than I'd like, but there are extensions for all major browsers, as well as iOS and Android, which allow interactions with the Internet Archive, including a "save page now" feature. It's possible to save items directly to the Internet Archive by other mechanisms. The original HTTP headers and IP address are needed for the WARC file.

Though often known for its Web archives, the "Wayback Machine", the Internet Archive also preserves texts, audio, video, software, and other formats. This was the largest single Archive Team project to date, and represents ~10% of the total Internet Archive holding as of 2012.

See: https://ws-dl.blogspot.com/2019/02/2019-02-08-google-is-being-shuttered.html and r/plexodus/comments/az285j/saving_of_public_google_content_at_the_internet/.

This page was last edited on 22 November 2021, at 20:56.

Should I conclude that I'm not one of the lucky 98.5% to have their posts archived by this project?
The archiving of public Google+ content to the Internet Archive by the ArchiveTeam has begun. Archive Warriors volunteer their time, resources, and services; there is no compensation. (The Internet Archive and Archive Team are generally not interested in your helpful comments and/or suggestions about alternative technologies, unless you're exceptionally qualified on the matter.) The G+MM community is a meeting and information-exchange point for proprietary Internet service users migrating elsewhere. Note that Warrior utilises a specially [Lua](https://en.wikipedia.org/wiki/Lua_(programming_language))-instrumented version of wget to produce WARC images, a standard developed by the Internet Archive and very widely adopted, as the Library of Congress link above indicates. The group maintains Deathwatch and Fire Drill lists of sites or platforms thought to be in peril or of significance. There are a set of Wayback Machine APIs which can test for archives of a known URL. You never know when this information might come in handy. If you do want this to happen, you're in luck.
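As a hedged illustration of those APIs: the public Availability API at https://archive.org/wayback/available returns the closest snapshot (if any) of a given URL as JSON. This sketch only constructs the request; the live call is commented out:

```shell
#!/bin/sh
# Sketch: query the Wayback Machine Availability API for a known URL.
# Only the request URL is printed here; uncomment curl to query for real.
wayback_check() {
  echo "https://archive.org/wayback/available?url=$1"
  # curl -s "https://archive.org/wayback/available?url=$1"
  # The JSON response's "archived_snapshots" object is empty if no capture exists.
}

wayback_check "https://plus.google.com/104092656004159577193"
# → https://archive.org/wayback/available?url=https://plus.google.com/104092656004159577193
```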

Warrior connection requirements: No VPNs. No proxies. Archiving your cafe's wifi service agreement repeatedly is not helpful. If your bandwidth is shared, limited, or metered, you can specifically limit bandwidth usage through the virtual machine; see instructions for specific VMs.

There may be other ways of searching for or accessing content on the Wayback Machine, and we'll add information as we receive it. Results should appear in the Wayback Machine over the coming weeks. You'll need a graphical browser.

This group thinks big. One should treat it like an entity of civilization. Total profiles archived are 50 batches * 1,000 sitemaps/batch * 680 items/sitemap * 100 profiles/item = 3.4 billion profiles, or the total number of Google+ profiles (as of March 2017). _This_ is not your new home, but we may help you and your community find or decide on where it will be. There is additional information, instructions, troubleshooting, and guidance at the Archive Team Warrior Wiki page: https://www.archiveteam.org/index.php?title=ArchiveTeam_Warrior, and at https://github.com/ArchiveTeam/googleplus-grab. Google+ allows up to 500 comments per post, but only presents a subset of these as static HTML. Thanks for your work.
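The batch arithmetic quoted from the tracker description can be sanity-checked directly in the shell:

```shell
#!/bin/sh
# Figures as given in the tracker description above.
batches=50
sitemaps_per_batch=1000
items_per_sitemap=680
profiles_per_item=100

total=$((batches * sitemaps_per_batch * items_per_sitemap * profiles_per_item))
echo "$total profiles"   # 3400000000 profiles, i.e. 3.4 billion
```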
For example, to save the Google+ Mass Migration Community homepage, at https://plus.google.com/communities/112164273001338979772, you'd request a capture of that URL. This can be scripted for both individual and large-scale batch archival. 100 Mb/s or better is recommended. Fair point, though the bash script is more portable, and has worked, at scale. The browser extensions above can simplify this for you. And, unlike those, this actually has a meaningful purpose.

I've tried searching by individual post URL and by my profile URL. Hey, this might be a little late, but how exactly do you search up your own profile in the Wayback Machine?

Since 2009 this variant force of nature has caught wind of shutdowns, shutoffs, mergers, and plain old deletions, and done our best to save the history before it's lost forever. There are a set of requirements for your Internet connection itself.

How does this affect you as a Google+ user? TL;DR: Most public Google+ content should live on at the Internet Archive thanks to a fanatical bunch of volunteers, and you can help.

Using DuckDuckGo (especially when set as your default search engine), you can access pages directly using the !wayback bang search, entered before the URL in your browser's navigation bar.

There's a (not particularly up-to-date) Wiki page largely consisting of Google's shutdown announcements: https://www.archiveteam.org/index.php?title=Google%2B. The actual archive code lives on GitHub: https://github.com/ArchiveTeam/googleplus-grab. The more interesting project tracker, showing updates in realtime, is: http://tracker.archiveteam.org/googleplus/. There's an IRC channel on EFnet: irc://irc.efnet.org/#archiveteam, and a subreddit: https://old.reddit.com/r/Archiveteam/.

The Internet Archive is a digital library with the stated mission of "universal access to all knowledge". If a page is not found, there will be a dialogue on the result page offering to "save page now". Don't delete your Google+ content or profile, and it should be saved.
Think of the Wayback Machine as the Web's attic, or basement, or storage locker. How can I specifically access archived content later? You can just add https://web.archive.org/*/ to the head of the URL: https://web.archive.org/*/https://plus.google.com/104092656004159577193. The !wayback bang search will search for a page in the Internet Archive. There is also a !save bang, but it is currently broken.

Archive projects run using a tool called "Warrior", based on "grabber" scripts, which run in a virtual machine (VirtualBox, VMWare, or other virtualisation systems) on a desktop or server system. You'll need: a Virtual Machine server, including VirtualBox, VMWare, Docker, or Hyper-V; a sufficiently high-bandwidth connection; and the skills and understanding to run all of this. No ISP connections that inject advertisements into web pages.

Past projects include Mozilla Addons, Tindeck, and UOL Forums (the "Brazilian AOL"), whilst present projects include Flickr and Tumblr, as well as several manual projects. There is an ArchiveTeam Googleplus collection: https://archive.org/details/archiveteam_googleplus. Thanks to the Archive Team for taking this on, the Internet Archive for hosting it, and to Fusl for answering all my pesky questions over the past few hours on the details of processing. It costs the Archive about $2,000 to host 1 terabyte of data.

A lot of Google+ content has already been deleted :(. Contents should appear in the Wayback Machine over the coming weeks. If you don't want this to happen, you can request removal of specific items through the Internet Archive's procedure: https://help.archive.org/hc/en-us/articles/360004716091-Wayback-Machine-General-Information and https://help.archive.org/hc/en-us/articles/360018138951-How-do-I-remove-an-item-page-from-the-site-. We've been sharing information and planning over the past few months, including the copious information we've collected on Google+ size, activity, profiles, communities, and characteristics of the site and platform.

Tools such as curl, wget, GET, or console browsers including lynx, links, elinks, w3m, etc., cannot access archived content directly; command-line use of the Internet Archive is limited, as the site now depends on JavaScript. Methods may be appropriate for single items or large-scale (100s, 1,000s, or 1,000,000s of) requests. So long as requests are legitimate, they are actively encouraged by the Archive. From a given Wayback Machine page, you can generally search for all pages under some specific URL. This is of mixed use for Google+ content: Google+ post URLs are given by userID or novelty URL, so you should be able to search for all content by a specific user or Google+ Page profile.

"Items" are sitemap subsets of 100 profiles, and 50 batches of 1,000 sitemaps at a time, each with about 680 or so items, will be processed over the course of this archival. Note that this shows only 1/50th of the total project at a time. Archive Team became aware that Google+ was shutting down in December of 2018.

I'm using xargs here to run ten simultaneous requests from the file gplus-urllist. I've run this on over 10,000 URLs over a modest residential broadband connection in a hair over two hours. This will apply mostly to high-def image and video content, though photographers may want to be aware.

We estimate that permanent storage costs us approximately US$2.00 per gigabyte. If you wish to solicit donations on your own, you may do so. This is independent of the Archive Team's GooglePlus project and does not affect either the content they collect or the fetchlist compilation. See https://help.archive.org/hc/en-us/articles/360014755952-Archive-org-Information.

Whelp, guess it is time to dust off my Archive Warrior VM. Don't you think it's a bad idea for the planet? This is far less taxing for the planet when compared to crypto and NFTs.
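The exact xargs invocation is elided in this copy of the post. A hedged reconstruction, with the Save Page Now endpoint and flags as my own assumptions and echo standing in for the real curl request:

```shell
#!/bin/sh
# Sample URL list (illustrative stand-ins, not real G+ URLs).
printf '%s\n' "https://example.com/a" "https://example.com/b" > gplus-urllist

# Up to ten parallel archive requests, one URL per invocation.
# In practice, replace echo with:
#   curl -s -o /dev/null "https://web.archive.org/save/{}"
xargs -P 10 -I{} echo "https://web.archive.org/save/{}" < gplus-urllist
```

With -P 10 the requests run concurrently, which is how a 10,000-URL list finishes in a couple of hours on residential broadband.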
https://archive.org/donate. Data integrity is a very high priority for the Archive Team, so use of VPNs with the official crawler is discouraged. No censorship. It's not clear that long discussion threads will be preserved. When you launch the Warrior, you'll be presented with a list of current projects. No ISP DNS that redirects to a search page.

If you have the technical resources and skills, run a Warrior instance. You can also request or add specific URLs to the Internet Archive directly, or use the Internet Archive browser extensions. Select the "Googleplus" project to archive Google+ content. Content archival is subject to the rate at which the project can proceed, and any limitations imposed outside its control.

Use non-captive DNS servers. The server may return an error page instead of content if they ban exit nodes. (For example, if your apartment building uses a single IP address, we don't want your apartment banned.)

This has been reported to DDG, though it's not yet fixed. Also, when using it on a page that exists, it redirects to the login page. Alternatives are !waybackmachine and !wbm.

Fusl, arkiver, and Jason Scott are awesome. Archive Team have previously saved other social media site content, and have several on their watchlists, including larger sites such as YouTube, Facebook, CodeAcademy, LiveJournal, Reddit, Twitter, WikiLeaks, and Wikipedia. Lots of potentially important and historically relevant discussions may have taken place on G+. I mean, saving data costs a lot of energy.

Single pages may be saved by navigating to https://web.archive.org/ and entering the URL into the "Save Page Now" form (should be on the right side of the page). You might consider recommending https://github.com/pastpages/savepagenow instead of creating a bash alias for calling Wayback's save API with curl.
If there is a /u/0 or higher number (generally 0 or 1) in the pattern, remove it: https://plus.google.com/u/0/104092656004159577193 => https://plus.google.com/104092656004159577193. There will be 34 million items, total, in the overall process. Contributions can be made in the way of funds or volunteered services, particularly as an Archive Warrior, running an archive instance yourself.
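That /u/N normalisation is mechanical; a small sed sketch (the helper name is my own, not from the original post):

```shell
#!/bin/sh
# Strip the /u/N session segment from a Google+ profile URL,
# per the pattern described above.
normalize_gplus_url() {
  printf '%s\n' "$1" | sed -E 's#plus\.google\.com/u/[0-9]+/#plus.google.com/#'
}

normalize_gplus_url "https://plus.google.com/u/0/104092656004159577193"
# → https://plus.google.com/104092656004159577193
```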

Maybe an affluent user of G+ becomes a notable figure, for great or terrible things, in which case their old G+ posts would become highly valuable. Plus, a general knowledge of this particular facet of society may play an, albeit tiny, role in future understanding and interpretation of this time period in history. There are a few limitations to this project: only public content that is presently available on Google+ is being included. The Wayback Machine has no intrinsic way of knowing what content belongs to what Collection or Community. There are tools to assist with rebuilding websites based on Wayback Machine archives. See "If you See Something, Save Something", listing extensions for Chrome, Firefox, Safari, iOS, Android, and a Javascript bookmarklet.

What does this mean, how does this affect you, and what can you do? Historically they have not been. Archive Team is a loose collective of rogue archivists, programmers, writers, and loudmouths dedicated to saving our digital heritage. Presently there's no straightforward way, though the Archive Team's "Warrior" archiver effectively does this. If you know the URL of the item, you can request it directly from the Internet Archive. Google+ Collections, Communities, and some other selections are not indicated by URL. No free cafe wifi. The G+MM / Plexodus effort became aware of Archive Team in January of 2019.

Wikipedia has a good article on the Internet Archive. See also: https://www.archiveteam.org/index.php?title=Google%2B, http://tracker.archiveteam.org/googleplus/, https://help.archive.org/hc/en-us/articles/360004716091-Wayback-Machine-General-Information, https://help.archive.org/hc/en-us/articles/360018138951-How-do-I-remove-an-item-page-from-the-site-, https://en.wikipedia.org/wiki/Lua_(programming_language), https://plus.google.com/communities/112164273001338979772, "The Internet Archive is working to preserve public Google+ posts before it shuts down" https://www.theverge.com/2019/3/17/18269707/internet-archive-archiveteam-preserving-public-google-plus-posts, and https://plus.google.com/104092656004159577193.
A desktop, server computer, or "cloud" hosted system(s).

In particular, archival from regions defaulting to another language may result in the Google+ site content (but not posts or comments) being in a different language. Vanity URLs did NOT get preserved. Whether or not these will support Google+ user, Page, Collection, or Community accounts is not presently clear, though we'll try to provide information as it becomes available. Or is there a better way to find my collection of posts in the Wayback Machine? It will take some weeks for the archived data to appear. Generally you'd find a profile by the unique account ID. How can I save all of a user's posts using their profile URL? (Previously saved content that's since been deleted will be available.)

If you want to save a large number of URLs, or save them from a command line, you can use a specific URL format to do so: https://web.archive.org/save/<URL>, where <URL> is the page you want to save. Does it make sense to keep this kind of information in this case? More on Warrior below under "What can you do?". Results: 98.5% of Profiles, 90%+ of Communities.

FAQ: https://blog.google/technology/safety-security/expediting-changes-google-plus, https://support.google.com/plus/answer/1045788, https://support.google.com/plus/answer/9217723, https://support.google.com/plus/answer/1045788#communities, https://support.google.com/plus/answer/9217723#signin, https://support.google.com/plus/answer/9217723#blogger, https://support.google.com/a/answer/6208960, https://cloud.google.com/blog/products/g-suite/new-enterprise-grade-features-in-googleplus-help-businesses-drive-collaboration, https://developers.google.com/+/api-shutdown, https://www.reddit.com/r/googleplus/comments/9nph98/google_mass_migration_community_on_g_helping, https://social.antefriguserat.de/index.php/Main_Page, https://www.blog.google/technology/safety-security/project-strobe/, https://www.blog.google/technology/safety-security/expediting-changes-google-plus/, https://wiki.archiveteam.org/index.php?title=Google%2B&oldid=47906.
Please do NOT run a Warrior instance if any of the connection restrictions listed here apply. No OpenDNS.

When I search the Wayback Machine for my G+ posts, I find a few, but not nearly the full set. Wikipedia has a good article on the Internet Archive. If you do absolutely nothing, there is a very good chance that much of your public Google+ content will be preserved by Archive Team, on the Internet Archive, and will be publicly visible there. 2019-02-08: Google+ Is Being Shuttered, Have We Preserved Enough of It? The Friends+Me Google+ Exporter should provide this capability shortly.