http://news.xbox.com/2013/06/update
Official source^
Crazy, they had to really.
It just annoys me how 'easily' they've done a total 180. Like, weren't they saying they couldn't get past the whole "login once a day" thing? Now they just remove it instantly? And all that DRM?
Sounds a bit like Sim City...
Well, it's only a software patch that's allowing this. You'll have to download and install an update before you're able to do this.
So I'm gonna guess they'll start turning it back on within the year.
The irony here is that Sony is the one with the track record for bait-and-switch, like what happened with the Linux-on-PS3 ordeal and, after a fashion, the requirement for PS+ to play online.
Microsoft, on the other hand, has been relatively up-front about what they've offered even if it sucks to hear.
http://i.imgur.com/JZOCnud.png
The internet is exploding from this news. Sorry MS, we still know that you fucked up big time, and you're backpedalling like incompetent idiots after your elaborate DRM plans.
LOOOOOOOOLLLL
Xboneighty
Microsoft needed to just push how their excellent sharing features (that Steam is now taking from them) worked, and everyone would have pulled that steaming hot lead pipe out of their ass.
This news is extremely disappointing.
People spoke with their wallets, Microsoft probably didn't reach the pre-order numbers they were hoping for, and they listened. How is that disappointing? Yes, the sharing thing is cool, but if it comes packed with all that region locking and DRM bull I'd rather not have it altogether. Steam isn't stealing anything as far as I'm concerned, as they will (hopefully) do it the right way without filling holes with garbage.
Llama that feature IS cool. Nobody is arguing that. It's the list of other things that come with that feature that have people annoyed.
I haven't seen anything saying they are removing the Kinect requirement. Until that happens there is no way they will be getting my business.
Introducing the Xbox 540!
E: Microsoft next year:
http://i.qkme.me/3pwtqn.jpg
The problem is, that feature shouldn't really exist in the first place. They added DRM and restricted what you can do. They then changed the parameters of the DRM to give you more permissions in one way, and fewer in another.
"Yay! That's so cool!" except for the exceptions, like nuclear submariners, "Crap this won't work for me!"
"But this DRM change makes me so happy!" ... "But this DRM is unplayable for me!"
o.o It doesn't benefit either of you! ... It just gets in the way less than it did for some, and more than it did for others.
You then see companies like CD Projekt, makers of The Witcher and GoG, selling games DRM-free... or I should say... never adding DRM to games... because they built their marketing strategy around "being the source of good, untainted games".
You can take a copy of the upcoming big-budget "The Witcher 3", install it on your machines, let a friend borrow it (probably not allowed in the license... but who knows, they might end up buying a copy for themselves), back it up, keep it in a museum for generations because there are no DRM servers to get shut down, and so forth.
I'm talking solely about this E-library concept, not the DRM behind it. As in, I have a copy of Bioshock, and instead of driving to my friend's house and giving him the disk, he can just download and play it as if I had loaned it to him, because he's on my "family" list and the system sees I have the game. Although, I can see how that would cause problems if some authentication server went down.
You could already kind of work the system with the 360. If somebody installs, say, MW2 onto their hard drive and boots the game (which requires the disk in the drive, but then it stops using the disk), you can then lift the disk out of the drive and put it in another Xbox. I never tried to see if both Xboxes could play with each other, but it did work for offline play. I guess it's also worth noting that the cover was off of my DVD drive, since pressing the eject button would immediately kill the game and go to dashboard (so you'd literally lift the disk out of the exposed tray).
Suddenly that "connection every 24 hours" thing makes sense. Without that, somebody could "borrow" a single player game while their friend keeps the physical disk, and just stay offline to play indefinitely without the disk. I guess at that point the question becomes what's the (financial) difference between that and just loaning him a physical copy?
E: then again, it still doesn't really make sense because it's a pain in the ass for anybody not using the sharing function.
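The check itself would be dead simple, something like this (a made-up sketch, nothing from the actual firmware, obviously):
Code:
#include <chrono>
#include <iostream>

// Hypothetical version of the 24-hour rule described above: if the console
// hasn't phoned home within the last day, licensed games stop launching.
// Every name here is invented for illustration.
using Clock = std::chrono::system_clock;

bool canPlay(Clock::time_point lastSuccessfulAuth) {
    return Clock::now() - lastSuccessfulAuth < std::chrono::hours(24);
}

int main() {
    auto lastAuth = Clock::now() - std::chrono::hours(30); // 30 hours offline
    std::cout << (canPlay(lastAuth) ? "game launches" : "blocked until you reconnect") << "\n";
}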
What the fuck, trying to comprehend this is making my head hurt. I think I'll just stick to PC gaming where everyone buys their own copy at 75% off.
They can't really pull it out; it's part of the firmware. The thing won't pass a POST without successfully getting data from the Kinect.
Not really. Even now, they're doing the opposite of what needs to be done.
When introducing drastic changes that completely alienate a percentage of your customers, you need to maintain the current method and then include the change as an opt-in system with rewards.
Allowing offline use is great, but if they're completely axing the Live features they've listed, they're going about it completely wrong. I mean, we're talking about if-statements on auth here.
Yeah, you can add a title to your Live library then send the disk to an offline-only console, but is that really such a worry for Microsoft? Is offline piracy really something to be afraid of? The "pirate" in this case of not-so-good license management (you know they'd label these offliners as pirates) is taking no resources from the Live server farm and in no way impacting other users of the service. It wouldn't even be that hard to check disk hashes (they filed a patent on this, I believe) to ensure a game isn't being doubled up in a LAN environment.
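A toy version of that hash check, purely illustrative (I have no idea what their patent actually describes):
Code:
#include <iostream>
#include <set>
#include <string>
#include <vector>

// Each console in the LAN session reports a hash of the disk it booted from;
// if the same hash shows up twice, somebody is doubling up on one license.
bool hasDuplicateDisk(const std::vector<std::string>& reportedHashes) {
    std::set<std::string> seen;
    for (const auto& h : reportedHashes)
        if (!seen.insert(h).second) return true; // same disk seen twice
    return false;
}

int main() {
    std::vector<std::string> lanSession = {"a1f3...", "9bc0...", "a1f3..."};
    std::cout << (hasDuplicateDisk(lanSession) ? "doubled-up disk" : "all unique") << "\n";
}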
The only reason I can possibly think of for Microsoft to be so outraged over the idea that the DRM can be pushed off with a software update is that it is not software controlled but through firmware locked to the motherboard.
Chances of me getting the XOne have gone up, though I'm still going to be wary until reviews start to come out and no major problems show up. Plus, I'm still waiting for some good games to get for it. The only things I'd get right now would be Ghosts (fuck off, it looks like a neat game and I'm eager to see this change in direction) and maybe AC4. Everything else is coming out next year.
Still, fucking congrats to Microsoft for taking the feedback and knocking the DRM away.
Guess what I did today?
Went to gamestop and preordered a PS4 and Watch Dogs.
I had a few regrets today, but this is not one of them.
The DRM is not removed, it is just re-parametrized. They undid the less-restrictive sharing permissions and redid the disk-based authentication.
Weakening DRM would be like if Steam did their sharing thing WITHOUT compromising anything with their offline mode, etc.
"Removing" (never adding) DRM is something like CD Projekt does, launching The Witcher 3 DRM-free on GoG (which, admittedly, is a bigggg advertisement for their service).
Unconfirmed rumors, but...
The "family sharing plan" was just glorified game demos. Haha, what shit.
I know it's unconfirmed but this is so ironic it has to be true.
Quote:
I will admit that I was not happy with how some of my fellow colleagues handled explaining the systems and many times pulled my hair out as I felt I could have done a better job explaining and selling the ideas to the press and public at large.
But that's almost always how it goes with marketing...
Yeah, Microsoft did a fantastically shitty job of revealing and explaining all the features they have, because it really would turn the tide considerably and shove the PS4 into the dust.
For example, this is why Titanfall isn't on the PS4 but is on the XOne:
http://www.respawn.com/news/lets-tal...ox-live-cloud/
Essentially, this Cloud is turning out to be amazing. Developers have so much versatility when it comes to it, since the servers can be used for dedicated servers, cloud computing, or probably tons more nifty little things I'm sure we have yet to find.
Reads more like an Azure marketing piece than a genuine explanation of why it's not on the PS4.
Hundreds of thousands of servers, sure, I'll believe that. :-3
To be fair, the Xbone has potentially more processing horsepower at its disposal with Microsoft's new XBL cloud. Less intense or less time-sensitive tasks can be offloaded to the cloud so the local hardware can focus on the more demanding bits. Theoretically, this should have allowed them to save on the per-unit cost of the console itself. At $450, the Xbone would become much more palatable relative to the PS4.
Yeah, I've read people saying this but really, what's it going to offload to The Cloud™? You can't do anything that's latency sensitive (so nothing real-time), anything critical to the single player game or anything that's going to use much bandwidth.
At this point I don't see it as anything but a marketing gimmick to try to downplay the PS4's hardware advantage. Expect to see completely mundane features that we've had for years now only being possible thanks to Azure.
http://dilbert.com/dyn/str_strip/000...0498.strip.gif
I said potential. Right now, it's not practical to offload heavy, time-sensitive stuff. That will change in the future as broadband gets better. What you can stream at this very moment, however, would be the level geometry itself. Textures can get processed on the box since that's what you really see, while the mesh can be calculated remotely. Perhaps skyboxes, or NPC interactions. How about storing all of the markers that are necessary to maintain a large, persistent world? Heck, Microsoft may not even know everything that they can do with the servers and Azure yet, it's there partly so developers can explore this new resource.
And the PS4 does not have that much of a hardware advantage. It just doesn't. Yes, it has more stream processors, but the GDDR5 advantage is made up for by software tomfoolery on the Xbone. I really don't think multi-platform games are going to look any better on the PS4 than they do on the Xbone.
Hey, you guys remember playing H2X/H2V on Live?
The cloud. Yet another out-of-band service that will break forced functionality when the next gen is phased out. I wonder if developers have to pay monthly fees for the Azure bullshit as well (meaning if the studio or support goes under, so does the cloud).
You know what else runs on Azure? H4. Hopefully the shit service with H4 was just a learning phase.
The Cloud: because developers don't get any sunlight, why should their software? :mech2:
Okay, that's enough cloud turbulence for one morning.
I lied. You know what other over hyped, mystical idea exists "in the cloud"? "Heaven". :mech2:
Nonononono. You would never want to run anything that directly affects a local simulation each frame.
The cloud is such a bullshit term.
This is a server and nothing more. You want to use it for player-versus-player interaction and that's about it. Single-threaded capability on these servers is going to be the most power-efficient thing Microsoft can afford to maintain. I'm actually surprised they're considering using some of them for on-demand dedicated servers for various games.
The games that use it for more than multi-player server hosting will be akin to MMOs. They'll have a disclaimer on the box making it clear to the player exactly what it is. As long as you know what you're getting into, I don't see a problem with this. I am a pretty anti-cloud person, but you'd have to be blind to ignore the benefits; it's rather disingenuous to only preach about its pitfalls.
I also fail to see how static level geometry affects a local simulation each frame if it doesn't even move. All you need to know is that there's terrain there so your collisions interact properly. NPC interactions, i.e. Mass Effect conversations, don't really have a time demand, and a skybox does nothing but look pretty...
They might not have a 'time demand' but they don't have a computational demand either, therefore making them pointless to process remotely.
That is not a true statement. If it were, then polygon counts wouldn't matter.
Quote:
If it were, then polygon counts wouldn't matter.
They only matter for the actual rendering, which happens 60+ times a second and is done locally on the GPU so I still fail to see how the cloud would be of benefit.
Simulation-level stuff (i.e. your physics colliders, render geometry (arguable, but depends on the specific engine architecture and the game's design), and game code that directly interacts with those two things, such as weapons fire checking what material was hit for particles/sounds/hit info/etc.) would be hung up on the latency to obtain a serialized version. Even if you had amazing ping (less than one millisecond), that ping would take up a significant fraction of a frame. If you get to more realistic pings (especially in the United States), where a good low ping is 33ms or lower (let's be reasonable here, it's usually somewhere under 200ms), then you've waited the equivalent of two 60Hz frames for your serialization.
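To put numbers on it (just the arithmetic, nothing clever):
Code:
#include <iostream>

// At 60Hz a frame lasts ~16.7ms, so a round trip to the server eats some
// number of whole frames before the serialized result can even arrive.
int main() {
    const double frameMs = 1000.0 / 60.0; // ~16.67ms per frame
    for (double pingMs : {1.0, 33.0, 100.0, 200.0})
        std::cout << pingMs << "ms ping = " << pingMs / frameMs
                  << " frames of waiting\n"; // 33ms is about 2 frames
}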
Networked game code that doesn't affect the simulation frame by frame would be fine since you can predictably handle latency. So your 'ME conversations' query to the server to see "what's hot and what should be talked about like cool hip dudes" would be fine since you can check frame by frame if a pointer to the data is valid yet without breaking anything on the simulation side. If you 'start the conversation', you ask the server for the conversation. While you're waiting to see if you even get a response, you can have the simulation carry on without any worry.
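In code, that pattern looks something like this (std::future standing in for whatever the real networking layer would be; the conversation query is made up):
Code:
#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>

// Fire the conversation query off to the "server", then keep the simulation
// ticking, peeking once per frame to see whether the reply has landed yet.
std::string fetchConversation() {
    std::this_thread::sleep_for(std::chrono::milliseconds(150)); // fake latency
    return "what's hot and what should be talked about";
}

int main() {
    auto reply = std::async(std::launch::async, fetchConversation);
    for (int frame = 0; frame < 60; ++frame) {
        // ... run one frame of the local simulation here, never blocking ...
        if (reply.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            std::cout << "frame " << frame << ": " << reply.get() << "\n";
            break;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(16)); // ~60Hz tick
    }
}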
Titanfall managed to get away with AI-bots(autopilot Titans) in their game because their gameplay completely relies on a server authoritative method to serialize everything. Want to walk forward? Server says that's fine. Want to walk more forward? Well, server says that you're running into a wall at this position, but you can keep trying. DayZ would work fine, considering it completely relies on the server for persistence. WoW or EVE would be fine as well.
Titanfall's server-authoritative method is the only way this hybrid cloud is going to work.
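In sketch form, the idea is roughly this (a toy of mine, not Titanfall's actual code):
Code:
#include <iostream>

// Server-authoritative movement: the client only *requests* a move; the
// server checks it against the world and replies with where you ended up.
struct Vec2 { float x, y; };

// Pretend world: a wall at x = 10 that nothing may cross.
Vec2 serverResolveMove(Vec2 requested) {
    if (requested.x > 10.0f) requested.x = 10.0f; // "you're running into a wall"
    return requested;
}

int main() {
    Vec2 want{12.0f, 0.0f};             // client: "I want to walk forward"
    Vec2 pos = serverResolveMove(want); // server: "fine, but only this far"
    std::cout << "authoritative position: " << pos.x << ", " << pos.y << "\n";
}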
A better way to look at it would be that your avatar is remotely controlled by you via the internet in the remotely calculated environment, and the console renders the textures or other bits that would be really hard to transmit over an internet connection without issues such as excessive popping. All the console is doing now is effectively drawing a matte painting over top of what would otherwise be a grey clay render (if even that) using the information contained in the packet to set the boundaries. It's not doing 3D. It doesn't have to figure out what the player's position is, where the lights are; it simply gets told all of that and executes the texture, which will be stored locally as a base and modified according to the lighting information received. Voxels might actually be a good fit for this, using the cloud information as a sort of skeleton to be magnetically attached to by the local hardware. Glorified pixel art, but damn pretty pixel art.
This may not work for fast-paced games at first, but that doesn't matter because not all games are fast-paced. They are not even all shooters.
This is, really, all uncharted territory. Even Microsoft themselves said they are excited to see what uses developers come up with for their new cloud services. I don't expect any current engine technologies to really be flexible with the cloud system. What I described above is most definitely not a conventional raster engine.
That's not how it works with games now and would actually be a step back.
It's pretty much all or nothing.
Commit everything to the cloud (OnLive failed miserably, btw) or keep it to the local client.
It's not about where the processing power is, it's about how long away it is. That time is crucial.
OnLive got its funding as an IPO only because of their strict testing environment which pretty much equated to requiring your ISP to have direct backbone access to where their datacenters were.
Learn some cpp and start digging into an engine to get a better idea of how a game actually runs. Unity or CE3 should do.
As it stands, you say cloud without really comprehending the mechanics behind it.
Wall-o-text incoming:
Microsoft's new XBL cloud is a large number (300,000+) of servers using the Azure framework (I assume you know what that is) that are used to host your typical XBL features but are supposedly also capable of doing pretty much anything the developer asks Microsoft permission to do, i.e. streaming content. Events need to be synchronized between client and server, and you need a system in place to make the effects of latency invisible to the client; this generally involves compression and minimizing the amount of data that needs to be transmitted and synchronized by choosing what gets computed where. OnLive chose to have everything about the game computed on their side so all the user received were the audio and video streams while all OnLive had to receive were user inputs and ID checks. What Microsoft is proposing is that developers can split the computing task between two machines by intelligently choosing which tasks are capable of being done remotely with minimal impact by latency.
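The way I picture that split, as a toy dispatcher (my framing, not anything Microsoft has published):
Code:
#include <iostream>
#include <string>

// Tag each task with how long you can wait for its result; anything that
// can outlive a round trip to the datacenter is a candidate for the cloud.
enum class Where { Local, Remote };

struct Task { std::string name; double deadlineMs; };

Where placeTask(const Task& t, double roundTripMs) {
    // If the result is needed sooner than a round trip can deliver it,
    // it has to run on the console.
    return t.deadlineMs > roundTripMs ? Where::Remote : Where::Local;
}

int main() {
    const double ping = 100.0; // ms, round trip
    for (const Task& t : {Task{"input + physics", 16.7},
                          Task{"distant lightmap bake", 2000.0},
                          Task{"NPC dialogue lookup", 500.0}})
        std::cout << t.name << " -> "
                  << (placeTask(t, ping) == Where::Remote ? "cloud" : "console") << "\n";
}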
Did I miss anything? It's straightforward enough. There are all those details of what to store where and which side performs what task, but that is all dependent on the task you are trying to perform.
I'm familiar with C++ and Python by necessity (yay school), enough to be able to follow what I'm looking at, though it's not my forte. I actually do understand the basics of how a current game works and is drawn; what I'm getting at, though, is that what you know about how a game currently works is not necessarily applicable to a game tailored to a hybrid remote-local computing solution.
You could all but forget things like LOD if you offload to the remote render farms: you have your geometry calculated and rendered remotely with identifiers, so your machine knows which textures to employ where and with what lighting, and can paint by numbers when it receives a video feed. It's 3D on the server's end, but all your machine is doing is largely 2D work. By removing the colour information from the transmission, you cut down on packet size; you could even let the 3D world have a low resolution and use the identifiers and 2D painting to mask it with the player none the wiser.
It's a new concept (and I may actually have it backwards with which side does what), and that's really what makes this exciting: it's new territory. There is nothing on the market, to my knowledge, that can currently take advantage of this hybrid solution out of the box. Microsoft doesn't even have a system in mind, though it was them who said that it was capable of letting developers use their servers to offload some of the work. And, what's even better, is that what they suggest we can do with it now is not going to be all we can ever do; the console hardware is stuck at what it is while internet connections and the Xbox Live back-end can and will be constantly improving.
Really, the Xbox One is a much more exciting package than the PS4. Sure, Sony could theoretically do all of the above but from what I've read, they don't already have the software and hardware infrastructure in place to do anything like what Microsoft is suggesting the Xbox One can do. I, like you, prefer to have my games all rendered and stored locally, but that doesn't make the potential any less cool. Yes, it's more efficient to render everything server-side, but I think the hybrid idea they are putting forth is a way to get around the supposed bandwidth issue (also local storage/horsepower issues) and to slowly ease people into a world where all of their software is provided as a service. While I'm appalled at the latter thought, the former is neat.
By the way, OnLive also didn't really fail because of bandwidth issues (I tried it, it worked just fine), it failed because it didn't offer people anything that they couldn't already do at a price that made it worth switching. It wasn't convenient enough. Anybody who could afford a sub to OnLive, and didn't want to play on the PC, had a console already and with a larger library of games and no requirement to pay a recurring fee. It could have had a promising future on handheld mobile devices if carriers didn't all price their data rates through the stratosphere; I would have subbed for that (though I'd rather be able to stream games from my already capable home PC) and I loathe subscription-based business models.
There was a time people thought it wasn't feasible to put the memory controller on the CPU; there was also a time when people thought offloading video processing to a dedicated controller wasn't worth it.
Both of those assumptions have since been proven wrong, so I'm reasonably sure that as connections speed up, offloading work to the "cloud" will be worth doing more and more.
Quote:
It's 3D on the server's end, but all your machine is doing is largely 2D work. By removing the colour information from the transmission, you cut down on packet size; you could even let the 3D world have a low resolution and use the identifiers and 2D painting to mask it with the player none the wiser. It's a new concept
You lose too much information if all you're doing is sending 2D data to the client; paint-by-numbers isn't going to look good, and you might as well just do the paint-by-numbers on the server, which then just becomes another streaming service like OnLive. Everything is pushing toward higher resolutions, so why would you want to make it lower? (It will probably end up worse than the current generation.)
Unless I can play without an always-on audio/visual recording device pointed at me I still won't even consider buying one.
( ͡° ͜ʖ ͡°)
@Freelancer:
Well me neither, but still.
@Skyline:
Because your local hardware can't necessarily do high-resolution geometry AND high resolution textures AND high fidelity lighting, that's why. The numbers can be more or less densely packed, and you'd have an algorithm that smooths out the final image. The whole point is that you transmit only the bare minimum information needed to draw that 2D image; but you first need to calculate the detailed image before you can decide where to put your info points. This is why I said voxels (which I still maintain are an illusion) might become a thing again. I've seen voxel animation, and the only reason it looked bad was because the game came out in the mid-90s and PCs simply didn't have the horsepower. We now have the horsepower.
Going back to "cloud processing", you could also do things like re-render cubemaps or lightmaps while destruction happens and blast them out to all applicable clients before they're needed.
The problem is that most of this stuff could have just been crammed in spare cycles of a few light-load frames. I don't know...
It's all about the detail. If you cram it into the spare cycles during a light load frame, you have to make do with whatever elbow room you have at that moment so you lower your fidelity target as a precaution against overreaching. If you offload it, latency becomes your main concern but that's far more consistent than the local hardware's dynamic load. You can therefore potentially set your targets higher and even scale them with the connection speed.
Personally, I think the bandwidth will only really be sustainable with ridiculous services such as Google Fiber. HOORAY Google Fiber.
That's not hard to do... :aaaaa:
I think his question is whether or not all this new technology will actually improve the core game play of a game, or if it's all just eye candy. In between my Bad Company and Chivalry sessions the past couple of days, I've played Chulip for the PS2, replayed Pikmin on the gamecube, and am now replaying Pikmin 2. Before that I was playing Banjo Tooie. So how far are these graphics going to go in making me want to replay these new games like that? Because as neat as graphical advancement is, it's not HD textures and lighting that keep me playing a game.
I think the primary use for their new cloud outside of the core features will be to store content that doesn't need to go on the disk, allowing you to have a bigger world without taking up more space on the drive.
Cloud computing for graphical simulations (like rendering lighting, LOD, etc.) doesn't make any sense to me. Microsoft is only in it for the money. How much would these cloud computing servers cost them? How much are they to run and maintain, and what's the cost of engineers? There comes a point where the cost-effectiveness dips, making it pointless business-wise for them.
I envisage them using cloud computing for data simulation purposes. For example you have a persistent MMO, the player might log off but the cloud can continue to run the simulations. They could be used to hammer out really intensive algorithms and spit the output to everyone connected and store the data for people who are offline, when they log back in they can resync their game data and they won't have lost any progress. This method allows a users console to basically only focus on their local environment, and would give more availability of the computing power to graphical calculations.
tl;dr Cloud computing to run non-graphical calculations and simulations, freeing the console up to work on graphics.
That's actually what I just said above you, only in a more succinct fashion. The world gets stored remotely, including those algorithms, and so your console just does graphics work while the dynamics are streamed to you. MMOs are a great example. Heck, even the level data could be piped to you for rendering in pieces if the world is large enough; you have half the continent on the disc/HDD, and then you stream another quarter when you get there while it deletes the locations less-visited by you. Like programs that get pre-loaded into RAM when they are used often but get removed when they are no longer accessed with frequency.
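A toy version of that eviction scheme (all names invented):
Code:
#include <iostream>
#include <list>
#include <string>
#include <unordered_map>

// Keep the world chunks you visited most recently on local storage; when a
// newly streamed chunk needs the room, drop the least-recently-visited one.
class ChunkCache {
    size_t capacity_;
    std::list<std::string> order_; // front = most recently visited
    std::unordered_map<std::string, std::list<std::string>::iterator> index_;
public:
    explicit ChunkCache(size_t capacity) : capacity_(capacity) {}
    void visit(const std::string& chunk) {
        auto it = index_.find(chunk);
        if (it != index_.end()) {
            order_.erase(it->second); // already cached: just bump it
        } else if (order_.size() == capacity_) {
            std::cout << "evicting " << order_.back() << "\n";
            index_.erase(order_.back());
            order_.pop_back(); // the least-visited location goes
        }
        order_.push_front(chunk);
        index_[chunk] = order_.begin();
    }
};

int main() {
    ChunkCache cache(2);
    for (std::string c : {"west_continent", "east_continent", "west_continent", "islands"})
        cache.visit(c); // prints "evicting east_continent" on the last visit
}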
If you look at it from a "computing as a service" viewpoint, it all makes sense. Microsoft wants to get everybody paying for Xbox Live and, if they can get away with it, for the extra services that aren't included in that $60/year fee. If the computing service they provide is cheaper to execute than the price people are willing to pay for it, they'll do it. Since this new network of theirs is supposedly flexible enough to do basically anything you want, we'll just have to see how the logistics and economics play out.
What's new here? MMOs have always worked this way. The only difference is that a cluster of servers is now called a 'cloud'. I can't see many MMO developers being keen to use third-party cloud services anyway. A loss of control coupled with a good chance of increased costs doesn't seem very attractive.
As for streaming level data, there's no point when disk space is cheaper and easier to upgrade than bandwidth. I'd be pretty surprised if that changed any time soon.
Except disk space isn't unlimited and, as far as I know, the Xbone has a fixed hard drive. Besides, the world may get bigger after the initial launch of the game.
Disk space is cheap but this isn't about expenses, it's about convenience. It's not convenient to have to manually go through and uninstall games to make room for another game expanding. A live system removes that need, and people who subscribe to Gold would probably appreciate the feature. At the very least, it would give some legitimacy to Xbox Live.
http://i.imgur.com/ReXwrfs.jpg
Yeah, this is what happens when your press conference tells people things and your hidden agenda doesn't.
So what game(s) are we getting for July?
Some tower defense game on the 5th.
I've been an Xbox guy since Halo came out. I'm leaning towards the PS4 for the next gen. Is that crazy or do you guys think I'm making the right move? Let me know, thanks....
Depends on the games you're interested in, I suppose. For me, since I don't care about Halo, the better hardware at a lower price gives it my support. That and my forgiveness isn't so readily granted.
I preordered my PS4 after E3 was over
:D
Yeahhhhh. Just preordered PS4 bundle with BF4 on Amazon. Looking forward to it very much so. I know it's not backwards compatible for disks, but do you guys think they'll have a lot of PS3 games available on a marketplace eventually? Wanting to try out MGS4, Uncharted, Last of Us, etc. at some point.
Pretty sure it's not backwards compatible, period. If they offer any PS3 games, it will probably be through an OnLive-type service.
So this here is pretty neat. Distributed lighting between server and client via the internet. And it uses, wait for it, voxels! This is basically lighting by numbers. Who called it? I called it. Go to hell.
Unfortunately for Xbone, it's running AMD hardware.
I don't think anybody said it couldn't be done, just that it probably wasn't feasible at this point. With a quick glance at the paper (I'll read it all when I have time), this technique still makes you pay for the latency (hello real-time) and a very hefty bandwidth toll that'd make online games unplayable on most connections.
Fuck off. Did you not see that last part of the video or something?
Latency is the reason I said you don't render the simulation with data that doesn't come from the client.
The moment you tie dynamic lighting into something players control is the moment that 100ms latency becomes input lag and you know damn well how much gamers despise input lag.
This has its uses on a LAN, but that's it. In a regular networked game, the latent lighting from a flashlight means you see a player come around the corner before his lighting catches up with him, and you cannot see around the corner until your flashlight catches up with you.
Except it looked pretty good even at those very high latencies, even with the jumpy light. That's what compensatory algorithms can do for you. And how many gamers play with 500-1000 ms latency rates? Anybody serious enough to care about that little bit of lag is not going to be playing on such an awful connection. Not only that, but the direct lighting isn't what this is really about, it's the diffuse. As in, the stuff that's going to actually use up the local resources at a greater rate when it tries to model where the photons are scattering to instead of simply being told where they are and then mapping it. Finally, this doesn't necessarily have to be tied to a player-controlled item, it can be environmental. The goal is offloading some of the work, not making everything controlled remotely.
So get real, you are pulling it out of context and seem to have a narrow idea of what this stuff can and will be used for. You said it wasn't workable and Nvidia just demonstrated that it is.
Oh yeah, I'm definitely out of context with a narrow idea of what it can be used for, even though I've said it's perfectly fine in a LAN setting where latency is low/non-existent while disastrous for interactive things like games where lag is incredibly noticeable.
You're posting in a FUCKING XBOXONE THREAD about something that would be a disaster if implemented on something the console was using while saying I have no clue what I'm talking about. You've admitted that you have no programming experience on this. Not like I've actually coded things reliant on serialization for games or anything oh wai.....
You're worse than 9mm ever was. He even admitted to never reading things he talks about and you haven't gotten there yet.
If you actually did read what you brag about, you'd know that this can indeed work with the Xbone. Their goal was to see if the hardware environment could support such a thing, and it did. However, they've stated that latency is indeed the weak point in the system and they don't expect it should be implemented for anything beyond a few network hops. Do I really need to go over how many hops are likely to take place between the player's home, local ISP, high-tier host, the XBL datacenter, and then the compute farm itself?
Seriously, the whole research project just confirmed that the hardware today has enough bandwidth to be a light farm for 50 or so local clients and remain competitive in quality you'd expect for higher-end PC hardware. This is something that would push light from your PC to your nVidia Shield.
I don't see why this is great. You're streaming a critical part of the engine from a server box over an internet connection, just for lighting. What good possible uses would this have? Maybe some clearer, sharper graphics, but is that really necessary nowadays? You're getting latency just for a maybe slightly noticeable difference over doing it all locally.
http://www.youtube.com/watch?v=lbrmAsxJPv4
Stupidest idea ever putting a native wireless receiver on the console.
BRB, going to an MLG LAN and haxxoring the consoles in the middle of matches.
It comes with a sticker, so it's clearly the better console.
I don't know if this has been mentioned yet, but it would seem that they are using a proprietary connector for the Kinect on the XBone. I knew something like that would happen. For shame. :(
Edit: Here, I found a source:
http://arstechnica.com/gaming/2013/0...t-work-on-pcs/
The old Kinect had a proprietary connector. The USB adapter was only there because older Xbox 360s didn't have that proprietary connection port, only USB. They don't want people using the new one on Windows like last time, they want them to pay the $400 for the Windows version. Of course, I'd be willing to bet that some will simply splice a USB connector onto the Xbone version.
Gonna laugh if this is the case, but are the two connectors on the different versions the same?
I WAS about to say that the XBOne Kinect was just going to be using something like a FireWire connection or Thunderbolt, but I realized that they are using USB 3.0 and there's no point in them mixing the two types (I figure it would be cheaper for them to just use USB 3.0 everywhere if possible).
I'm going with Zeph on this one. I give it a week to a month from the release of the XBone before someone splices a non-proprietary connection onto the sucker. If I'm wrong? IDK. Let's take bets.
Probably end up finding out that it is indeed just a USB connection, again. It's not as if they're going to invent their own standard, surely.
http://www.engsoc.org/~pat/log/xbox/20041002.jpg
Image resize functionality is broken. Oh well.
Are you fucking happy now? Kinect is No Longer Mandatory Because Everyone is a Whiny Bitch
Why this is Bad for Everyone
Quote:
"It's simple: As a standard-issue, always-on-and-enabled FEATURE of the Xbox One, Kinect was something that had a hugely better chance (not a guarantee, but a good chance) of being developed for in a meaningful way....But as an optional feature, there's far less incentive for developers to take advantage of everything the new Kinect can do. It's actually a really impressive device, and maybe some of that still comes through. There's a good chance it does, actually. It's just much lower than it would have been otherwise."
Pretty much; these cool features that the Xbox One had are now being destroyed (although always-online was maybe a little too early to have, since Wi-Fi isn't that globally stable yet). I don't understand why everyone seems to hate Kinect though, it seems like a really awesome device. Demos and all.
Just because you don't need it plugged in doesn't suddenly mean that it isn't being bundled with every console. You can unplug your controllers and drives too but it doesn't mean developers are compelled to make them optional accessories. :ugh:
Besides, I've yet to be impressed by any Kinect applications, and it isn't exactly new by this point.
Because not everyone wants a camera staring at them the entire time they're playing a video game, and having said camera decide how much to skew prices when it comes to renting movies and shows based on how many people are in the room.
Bit-coin-currency22 pretty much nailed it. If you're going to bitch about people bitching, at least try and hear them out first before...well bitching.
Making it mandatory to even fucking use the Xbox was fucking stupid. If I'm not using the goddamn thing to do anything, why must I have the damn thing plugged in?
The Kinect's connector is also seemingly proprietary. So no hacking it up on the PC (apparently the Xbox version is essentially subsidized, the PC version to come later IIRC will be priced in the hundreds).
In other news, I pondered the idea of switching my preorder of the PS4 to the Xbone today while at GameStop (thanks in part to the many 180s they've pulled, the new controller, my disgust with Sony's PSN, and some upcoming work). Apparently neither system can be preordered any longer. So yeah, I decided to keep my order for the time being.
But why is it so hard to just leave it plugged in, even when not in use? It's not going to do anything, and it'll just help if you want to use any voice commands or recognize entering players. Sure, I guess it's good now that if you break it, you don't have to go and buy a new one necessarily, but if you're smart and place it on a stable surface it shouldn't break in the first place.
Because of this:
Also, you're from the US, right, ODX? I'm sure you've heard of the whole thing about the NSA assembling a gigantic portfolio of internet usage for every American. I can't think of too many people who would be very excited to add the Xbox's Kinect camera to that. There's nothing hard about leaving it plugged in. It's the fact that you have (or had) to for the unit to even function. Seems suspicious.
...Did anyone even mention the kinect breaking?
No? Then what does that have to do with the price of cheese?
The bones of contention lie in (a) the Kinect is always watching, always listening, and (b) it was mandatory. Now that it's no longer mandatory, I no longer have to wonder whether some NSA jerk is photoshopping crude mustaches onto video feed taken from the Kinect while I'm gaming.
The breaking part was one of the only good reasons I can think of for it not being mandatory anymore, so if it does break your console isn't rendered useless.
Oh god, the NSA thing. Don't get me started. All I can say is my personal opinion is that it doesn't freaking matter if someone sees me staring at a TV screen playing games. But that's another thread.
If your biggest argument against buying the new xbox is the NSA, you might wanna just throw your cellphone away, get rid of your pc, and go hide in a bunker. We all carry phones with front facing cameras, if they wanted to spy, it wouldn't be with your console. Jesus.
So what was the excuse for leaving it always on, then? I can't think of any reason other than the pay-per-person movie pricing, which seems difficult if not impossible to even enforce.
Hence why I said it
At that point, why even bother? Honestly, I don't think Microsoft had some global evil genius plan with making the Kinect always on. It really sounds like they were trying to promote the use of the Kinect by ensuring developers that people would always have one, but like everything else, they went about it the wrong way. They could have just as easily (and much less controversially) said "Hey guys, all Xbox One units come with the Kinect sensor, that way developers can integrate it into their games knowing everybody will have one!" They didn't have to pull a big brother on us.
When Microsuck has been implicated as a willing participant in the biggest infringement on privacy in our history, as well as providing day-0 exploits to government agencies to exploit diplomatic computers from other nations, I think it's wise not to be indifferent to a fucking camera and microphone in your living room, on at all times.