This morning I was chatting on Google Hangouts with my brother, who is currently living in South Korea. When I left the house, I just picked up the conversation on my phone, but I noticed something that hadn't dawned on me before: the sound it makes is not the sound I set back when I first started using Hangouts. When did that change? My best guess is that it changed when Google finally bestowed upon us the ability to set custom sounds for our Hangouts conversations.
If Facebook is the "move fast and break things" company, then Google is the "test everything" company. Google Hangouts had so much hype around it: a place to unify all our communications, a single place to have merged conversations. The hype had us believing it would give us a single communication platform. The reality has been anything but. Almost immediately, people noticed the lack of functionality. You couldn't even set custom sounds per person. I'm sure I'm not the only one who sent feedback right away, and it took quite some time for that feature to roll out. WHAT TOOK SO LONG?
This singular feature isn't just a one-off case, either. I feel like every feature users request takes six months to a year to roll out. Google moves with the speed of a tortoise. Google Hangouts' group chats still don't feel properly integrated. Try using voice to send a message to a group; we only recently got the ability to do this with individual hangouts. I hope Google very much regrets its deal with Vidyo in the creation of Hangouts. What used to be the open platform of Google Chat (with XMPP behind it) is now a very much closed system that is slow to gain functionality.
Maybe at some point Google will realize its folly, but I suspect we'll have a long wait for that too.
Tantek coined a phrase this past week that I wanted to share with everyone: Denial of Productivity Attack. This is simply a way of describing when people troll your group and try every way they can to prevent you from getting any real work done. Sometimes it's making asinine statements to incite argument. Other times it's continually bringing up a subject that has already been discussed and dismissed by the group. The question is how best to deal with such a person. Letting them drag you into argument after argument is certainly a mistake, as then they win. The solution, I think, is to call bullshit when you see it and just move on. Give as terse a response as you can that will shut down their argument, and go back to getting real work done. Most times they will likely just ignore any faults in their own logic, which can be a challenge for sure. The light at the end of the tunnel is that most people are pretty good at detecting bullshit themselves. All one need do is call it out, and then others will see the gaps in their logic, the dodges to other subjects, the outright lies.
I told Ann that I would write up some of my thoughts on why deleting others' posts via any sort of social web API is not needed. So here it is, in my bleary-eyed state.
There is a bit of argument about how groups should work in the federated social web. Realistically there are three key concerns: independent sites, public silos, and private company networks. One of the key points of the indieweb has been control of your own data, so while groups might merge all of these users together, it's of particular importance to the indieweb that our own posts cannot be deleted by anyone else. I believe this same policy can work just fine at an API level in group interactions. Let's start with an indieweb site creating a group, however that happens. If another indieweb member joins and posts, they are posting their content on their own site, and a copy is then propagated to the group. Likewise, they are free to pull the original group conversation back to their own site. If we abstract this so that one of those indieweb sites has multiple people within it, this still works just fine. Go one step further and it is the equivalent of a silo. Between an indieweb site and a silo, data can move exactly the same way: no indieweb site can delete a post that exists within a silo; it can only delete its own post and tell the silo the post was deleted. You can swap a company network in for these public silos and, again, it will work exactly the same.
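A minimal in-memory sketch of that ownership model, with hypothetical class and method names (none of this is from any spec); the point is only that the group ever holds copies, never the canonical posts:

```python
class IndiewebSite:
    def __init__(self, domain):
        self.domain = domain
        self.posts = {}                      # permalink -> canonical content

    def publish(self, slug, content):
        url = "https://%s/%s" % (self.domain, slug)
        self.posts[url] = content
        return url

class Group:
    def __init__(self):
        self.copies = {}                     # original permalink -> copied content

    def receive(self, original_url, content):
        # The member posted on their own site first; the group gets a copy.
        self.copies[original_url] = content

    def delete(self):
        # Deleting the group drops the copies, never the originals.
        self.copies.clear()

site = IndiewebSite("alice.example")
group = Group()
url = site.publish("2015/hello", "hello group")
group.receive(url, site.posts[url])
group.delete()
# The canonical post on alice.example is untouched by the group deletion.
```

Deleting the group clears only the group's copies; the post on the member's own site survives, which is exactly the property the indieweb cares about.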
With a "group", the question comes in the form of deleting the entire group. We see on Facebook that deleting a group removes all posts within that group. But on the indieweb, the opposite would be wanted: deleting a group would do nothing to the original posts. This is actually not a conflict at all when you consider that within a silo, anything can be done. A silo could choose to delete all of its users' posts when a group gets deleted. A silo cannot delete any indieweb user's content, but it can remove all reference to it. Meanwhile, on the indieweb site, the copies of posts from the silo's users might still exist, but all links to the silo copies of those posts would be gone. The basic point is that within a silo, there are no limits on how the internal implementation handles this.
Again, this all applies equally to any sort of business setting. If there is a group project between two companies, neither company should be able to delete the posts of the other company's employees. Also, each company would prefer to keep copies of any posts from the other company, so as not to lose the context of the comments made. Any attempt to force deletion of content from the other will ultimately be futile, as there is always a way to save anything that is viewable by someone. Inside a company's system there could be complex rules for deciding who can post where, how data is archived, approval processes, and so on. That is still all internal implementation, and not an issue for the API.
I successfully managed to get my micropub-to-Twitter bridge software working today. The basic idea is this: I log in to mpTweet, authorize mpTweet to publish on my behalf, then log in to mpTweet and allow it to register itself with my site. It sends over a token and registers itself as a micropub syndication endpoint. That's it! Now I get a new option in my list of syndicate-to values in all my apps. Whenever I use this option, the micropub endpoint on my site will post to my site, then immediately resend this data (along with the URL) to mpTweet, which in turn posts it to Twitter. Just like a regular micropub endpoint, it returns the URL of the syndicated copy, which can now be displayed on my site. The main goal is to get it so that I can publish to any micropub server as a syndication of my site. Actually, I can do this right now, I suppose; I just need to manually insert the token into my site, and the syndication target won't necessarily know to treat the post as a syndicated copy. But it's simple and cool. Now to make a Facebook version... eep.
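The flow above can be sketched roughly like this, with HTTP replaced by stubs. All the names here (`create_local_post`, `micropub_post`, the token, the URLs) are assumptions for illustration, not mpTweet's actual API:

```python
posts = {}          # my site's storage: permalink -> micropub properties
syndications = {}   # permalink -> list of syndicated-copy URLs

SYNDICATION_TARGETS = {
    # syndicate-to uid -> (endpoint URL, token from the registration handshake)
    "https://mptweet.example/": ("https://mptweet.example/micropub", "tok123"),
}

def create_local_post(props):
    url = "https://example.com/notes/1"      # hypothetical permalink
    posts[url] = props
    return url

def micropub_post(endpoint, token, payload):
    # Stand-in for an HTTP POST with "Authorization: Bearer <token>";
    # mpTweet would publish the tweet and return its URL.
    return "https://twitter.com/example/status/1"

def handle_micropub(props):
    post_url = create_local_post(props)               # 1. post to my own site
    for target in props.get("mp-syndicate-to", []):
        endpoint, token = SYNDICATION_TARGETS[target]
        payload = dict(props, url=post_url)           # 2. resend data plus my URL
        synd_url = micropub_post(endpoint, token, payload)
        syndications.setdefault(post_url, []).append(synd_url)  # 3. record copy
    return post_url
```

The key step is (2): the original post's URL rides along with the resent data, so the bridge knows to treat what it publishes as a syndicated copy rather than a new post.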
Not exactly. As I understand it, silo.pub uses micropub to post directly to a silo. While a minor bit of hackery could get that working, mpTweet is for syndicating to Twitter via micropub rather than webmention. I think it will be really useful for syndicating alternate content, as now my site controls exactly what gets published, not the service.
Building on yesterday's post, I have seen the problem with most of the JSON-LD crowd. They want to map JSON-LD to EVERYTHING. They will tell you how popular JSON-LD is and refer you to a list of sites that are just data stores. Marking up data to share between organizations is great; I'm all in favor of that. NASA is sharing via JSON-LD. Cool. But that's a far cry from a social website. Nothing social is in that list. A format for data transfer would work, sure, but it's not the easiest way, and it's not the most user- or developer-friendly way. Developer-friendly systems let you add pieces and support as you need them. For large systems that's fine, because it's an "add it once and you are done" type of situation, but for a decentralized system there are going to be a lot of different implementations. You need something simple to get people to bother with it.
Academics always want to have everything uniform. So many specs for data start out clean but end up messy, specifically because they try to cover literally everything. Schema.org is often made fun of for its "fax number for a volcano", and it's a perfectly valid point. They want perfect uniformity across all locations, and since a location can have a fax number and any landmass is a location, oceans and volcanoes get confusing fields. This isn't really the worst part, however. The worst part is that they try to specify "everything you will need", which means anything outside the standard needs some messy extension system. To add a single field like "last active eruption", you need an extension, which requires a separate file to describe that extension, and developers are dissuaded by this extra work. At that point a non-semantic method of markup is much easier. Honestly, schema.org is only useful because Google says it reads this format for its quick answers. Which means developers only use it where they need it. If you don't need to, why add a complex mess to your code?
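To make the "fax number for a volcano" jab concrete: schema.org's Volcano type inherits from Place, so markup like this is perfectly valid schema.org, even though the property makes no sense here (the values are made up):

```json
{
  "@context": "http://schema.org",
  "@type": "Volcano",
  "name": "Mount Example",
  "faxNumber": "+1-555-0100"
}
```

Meanwhile a genuinely volcano-specific field like a last eruption date has no property at all without going through the extension machinery.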
Today I heard of something I had not heard of before: a challenge to write 100 words a day for 100 days. I honestly have no clue how I am going to do at this, but I am going to try. It may well end up as random babbling by the end. This suits me rather well after I had something of an epiphany while trying to write up a post recently: I realized I hate my writing style. I get sidetracked in explanations or just random thoughts, and in the end it's just a mess. It doesn't feel terrible, but it's not great. If I wouldn't read it, I can't really expect others to.
No, though I pretty much never put a title on a note. Actually, the main difference between notes and articles for me is how they are drafted. Notes are plain text only and written as plain text; articles are much longer form and written in an HTML editor. I can also add a horizontal line to tell it where to break for the shortened (stream) version versus the full text.
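A rough sketch of that split, assuming the draft is stored as an HTML string and the break marker is a literal `<hr>` (my real markup may differ):

```python
def split_article(html):
    """Return (stream_version, full_text), split at the first <hr>."""
    head, sep, _rest = html.partition("<hr>")  # assumes a bare <hr>, not <hr/>
    summary = head.strip() if sep else html.strip()
    return summary, html
```

For example, `split_article("<p>Intro</p><hr><p>Rest</p>")` yields `"<p>Intro</p>"` as the stream version while keeping the full text intact; a draft with no `<hr>` is shown whole in both places.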
They certainly would. My test implementation is actually a Twitter posting app: https://github.com/dissolve/mptweet All I really have left to do is work out the handshake of sending a key to my site's micropub endpoint and then the micropub receiver on mpTweet.
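The registration half of that handshake might look roughly like this on my site's side; every name here is a guess at the eventual shape, not mpTweet's actual interface:

```python
syndication_targets = {}   # target name -> endpoint URL and bearer token

def register_syndication_target(name, endpoint, token):
    # Called when mpTweet's key arrives at my site's micropub endpoint;
    # the target then appears in my apps' syndicate-to lists.
    syndication_targets[name] = {"endpoint": endpoint, "token": token}

def syndicate_to_options():
    # What my posting apps would show as available syndication targets.
    return sorted(syndication_targets)
```

Once registered, every post that selects the target can be forwarded to mpTweet's micropub receiver with the stored token.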